Study on the calibration and optimization of double theodolites baseline
NASA Astrophysics Data System (ADS)
Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao
2018-01-01
The baseline of a double-theodolite measurement system serves as the benchmark for the scale of the system and therefore affects its accuracy. This paper puts forward a method for calibrating and optimizing the double-theodolite baseline: the two theodolites measure a reference ruler of known length, and the baseline is then recovered by inverting the baseline formula. Analyses based on the law of error propagation show that the baseline error function is an important index of system accuracy, and that the position and posture of the reference ruler, among other factors, affect the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error; the effect of posture is not uniform; and the baseline error is smallest when the reference ruler is placed at x = 500 mm and y = 1000 mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. This study of reference ruler placement provides a useful reference for improving the accuracy of double-theodolite measurement systems.
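As an illustration of the inversion described above, here is a minimal Python sketch: triangulated coordinates scale linearly with the baseline, so a known ruler length fixes the scale, and Monte Carlo angle noise propagates into a baseline error. The geometry, ruler placement, and 2-arcsecond noise level are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def intersect(alpha, beta):
    """Triangulate a point with a unit baseline: theodolites at (0, 0) and
    (1, 0), interior angles alpha and beta measured from the baseline."""
    ta, tb = np.tan(alpha), np.tan(beta)
    return np.array([tb, ta * tb]) / (ta + tb)

def baseline_from_ruler(angles, ruler_len):
    """Invert the baseline: triangulated coordinates scale linearly with
    the baseline, so b = ruler length / unit-baseline endpoint separation."""
    (a1, b1), (a2, b2) = angles        # angles to the two ruler endpoints
    g = np.linalg.norm(intersect(a1, b1) - intersect(a2, b2))
    return ruler_len / g

# Monte Carlo propagation of angle noise into the recovered baseline
rng = np.random.default_rng(0)
true_b, ruler = 2000.0, 1000.0         # mm; illustrative values only
p1, p2 = np.array([500.0, 1000.0]), np.array([1500.0, 1000.0])

def true_angles(p):                    # stations at (0, 0) and (true_b, 0)
    return np.array([np.arctan2(p[1], p[0]),
                     np.arctan2(p[1], true_b - p[0])])

sigma = np.deg2rad(2.0 / 3600.0)       # assumed 2-arcsecond angle noise
est = [baseline_from_ruler([true_angles(p) + rng.normal(0, sigma, 2)
                            for p in (p1, p2)], ruler)
       for _ in range(10000)]
print(f"recovered baseline rms error: {np.std(est):.3f} mm")
```

Repeating the simulation for different ruler positions reproduces the kind of placement study the paper performs.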
NASA Astrophysics Data System (ADS)
Gao, X.; Li, T.; Zhang, X.; Geng, X.
2018-04-01
In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full emulation validation of the InSAR stochastic model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behaviour of InSAR height measurement were fully evaluated.
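A worked first-order example of how a baseline error maps into a height error: under the standard relation h = h_amb · φ/2π with height of ambiguity h_amb = λR sinθ / (2B⊥) (monostatic convention; the bistatic TanDEM-X factor differs), h scales as 1/B⊥ at fixed phase, so dh = −(h/B⊥) dB⊥. All numbers below are assumed, TanDEM-X-like values, not the paper's.

```python
import numpy as np

lam    = 0.031           # X-band wavelength [m] (assumed)
R      = 600e3           # slant range [m] (assumed)
theta  = np.deg2rad(35)  # incidence angle (assumed)
B_perp = 200.0           # perpendicular baseline [m] (assumed)

h_amb = lam * R * np.sin(theta) / (2.0 * B_perp)   # height of ambiguity
print(f"height of ambiguity: {h_amb:.1f} m")

# First-order model: h = h_amb * phi/(2*pi)  =>  dh = -(h / B_perp) * dB
for h in (100.0, 500.0, 1000.0):        # terrain heights above reference
    for dB in (0.001, 0.01):            # 1 mm and 1 cm baseline errors
        dh = -(h / B_perp) * dB
        print(f"h={h:6.0f} m  dB={dB*1e3:4.1f} mm  ->  dh={dh*1e3:7.2f} mm")
```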
Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?
Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.
2016-01-01
Aims Baseline HbA1c is a major predictor of response to glucose lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate ‘pre-baseline’ HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas, we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results Baseline HbA1c was a major predictor of response (R² = 0.19, β = −0.44, p < 0.001). The association between pre-baseline HbA1c and response was similar, suggesting that the greater response at higher baseline HbA1c is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analysis in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose lowering therapy. PMID:27050911
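The confounding mechanism at issue can be shown in a toy simulation: a covariate correlated with true baseline HbA1c appears to predict response until the measured baseline is added to the model. All parameters below are invented for illustration; residual attenuation remains because the measured baseline itself carries error, which is the effect the paper's bias correction targets.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
true_base = rng.normal(75, 15, n)               # true HbA1c (mmol/mol)
covar = 0.5 * true_base + rng.normal(0, 10, n)  # covariate tied to baseline
meas_base = true_base + rng.normal(0, 4, n)     # error-prone measured baseline
response = -0.4 * true_base + rng.normal(0, 8, n)  # bigger fall if higher base

def coef_of_covar(cols):
    """OLS coefficient on the covariate (first column after the constant)."""
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(response, X).fit().params[1]

print("unadjusted covariate effect:", round(coef_of_covar([covar]), 3))
print("baseline-adjusted effect:   ",
      round(coef_of_covar([covar, meas_base]), 3))
```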
NASA Technical Reports Server (NTRS)
Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.
1991-01-01
The internal consistency of the baseline-length measurements derived from analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the length measurements of the baselines by 5-10 percent. The observed reduction in the scatter of the baseline lengths is less than what is expected from the behavior of the formal errors, which suggest that the baseline-length measurement precision should improve 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping function parameters, instrumental biases in the WVR and the barometer, or both.
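The "internal consistency" precision measure used here is essentially the scatter of repeated baseline-length estimates about their (weighted) mean. A minimal sketch, with an invented baseline length, invented formal errors, and invented scatter levels standing in for the 28 sessions:

```python
import numpy as np

def wrms(lengths, sigmas):
    """Weighted rms scatter of repeated baseline-length estimates about
    their weighted mean -- the internal-consistency precision measure."""
    w = 1.0 / np.asarray(sigmas)**2
    mean = np.sum(w * lengths) / np.sum(w)
    return np.sqrt(np.sum(w * (lengths - mean)**2) / np.sum(w))

rng = np.random.default_rng(12)
true_len = 3929881.0                              # m; illustrative value
no_wvr   = true_len + rng.normal(0, 0.010, 28)    # 10 mm session scatter
with_wvr = true_len + rng.normal(0, 0.009, 28)    # ~10% lower scatter
sig = np.full(28, 0.008)                          # assumed formal errors [m]
print(f"scatter without WVR: {wrms(no_wvr, sig)*1e3:.1f} mm")
print(f"scatter with WVR:    {wrms(with_wvr, sig)*1e3:.1f} mm")
```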
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
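The two quantities reported above, rms error and an approximately flicker (1/f) error spectrum, are easy to compute from a position-residual series. The sketch below synthesizes a flicker-like toy series (not the authors' data) scaled to 2.5 mm rms and verifies the spectral slope.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
n = 900                                   # 15 min at 1 Hz
white = rng.normal(size=n)
f = np.fft.rfftfreq(n, d=1.0)
shaping = np.zeros_like(f)
shaping[1:] = f[1:] ** -0.5               # |H(f)| ~ f^(-1/2)  =>  PSD ~ 1/f
err = np.fft.irfft(np.fft.rfft(white) * shaping, n)
err *= 2.5e-3 / np.sqrt(np.mean(err**2))  # scale to 2.5 mm rms

print(f"rms = {np.sqrt(np.mean(err**2))*1e3:.2f} mm")
freq, psd = welch(err, fs=1.0, nperseg=256)
mask = freq > 0
slope = np.polyfit(np.log(freq[mask]), np.log(psd[mask]), 1)[0]
print(f"log-log PSD slope = {slope:.2f}  (flicker noise ~ -1)")
```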
Accounting for baseline differences and measurement error in the analysis of change over time.
Braun, Julia; Held, Leonhard; Ledergerber, Bruno
2014-01-15
If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariates. By fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has recently been provided, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.
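The core modelling idea, treating the baseline observation as part of the longitudinal outcome vector rather than as an error-prone covariate, can be sketched with a standard mixed-effects fit. This toy example (invented CD4-like data; statsmodels' MixedLM with a random intercept) shows the machinery only; the paper's method additionally conditions the expected change on the underlying error-free baseline, which this sketch does not reproduce.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, times = 200, np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # years
group = rng.integers(0, 2, n)                           # e.g. HCV co-infection
true0 = rng.normal(350, 80, n)                          # underlying baseline
slope = 120 - 40 * group                                # slower rise in group 1
y = (true0[:, None] + slope[:, None] * times
     + rng.normal(0, 25, (n, len(times))))              # measurement error

df = pd.DataFrame({
    "id":    np.repeat(np.arange(n), len(times)),
    "time":  np.tile(times, n),
    "group": np.repeat(group, len(times)),
    "cd4":   y.ravel(),
})
# Baseline rows (time 0) stay in the outcome; group modifies the time slope
m = smf.mixedlm("cd4 ~ time + time:group", df, groups=df["id"]).fit()
print(m.params[["time", "time:group"]])     # expected change per year
```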
The effect of the dynamic wet troposphere on VLBI measurements
NASA Technical Reports Server (NTRS)
Treuhaft, R. N.; Lanyi, G. E.
1986-01-01
Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10⁻¹³ s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.
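The Allan standard deviation quoted above is a standard stability statistic. For time-error (delay) samples x with averaging time τ = m·dt, the overlapping estimate is σ(τ) = sqrt(⟨(x[k+2m] − 2x[k+m] + x[k])²⟩ / 2) / τ. A minimal implementation on a synthetic delay series (not DSN data):

```python
import numpy as np

def allan_deviation(x, m, dt=1.0):
    """Overlapping Allan deviation from time-error samples x (seconds),
    at averaging time tau = m*dt. Minimal textbook implementation."""
    tau = m * dt
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]   # second differences at lag m
    return np.sqrt(0.5 * np.mean(d**2)) / tau

# Toy delay series with white frequency noise (random-walk delay), 1 s steps
rng = np.random.default_rng(4)
delay = np.cumsum(rng.normal(0, 1e-12, 4000))   # delay samples [s]
for m in (10, 100, 200):
    print(f"tau = {m:3d} s  sigma = {allan_deviation(delay, m):.2e} s/s")
```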
Modeling Nonlinear Errors in Surface Electromyography Due To Baseline Noise: A New Methodology
Law, Laura Frey; Krishnan, Chandramouli; Avin, Keith
2010-01-01
The surface electromyographic (EMG) signal is often contaminated by some degree of baseline noise. It is customary for scientists to subtract baseline noise from the measured EMG signal prior to further analyses, based on the assumption that baseline noise adds linearly to the observed EMG signal. The stochastic nature of both the baseline and EMG signal, however, may invalidate this assumption. Alternately, "true" EMG signals may be either minimally or nonlinearly affected by baseline noise. This information is particularly relevant at low contraction intensities, when signal-to-noise ratios (SNR) may be lowest. Thus, the purpose of this simulation study was to investigate the influence of varying levels of baseline noise (approximately 2-40% of maximum EMG amplitude) on mean EMG burst amplitude and to assess the best means to account for signal noise. The simulations indicated that baseline noise had minimal effects on mean EMG activity for maximum contractions, but that these effects increased nonlinearly with increasing noise levels and decreasing signal amplitudes. Thus, simple baseline noise subtraction resulted in substantial error when estimating mean activity during low intensity EMG bursts. Conversely, correcting the EMG signal as a nonlinear function of both baseline and measured signal amplitude provided highly accurate estimates of EMG amplitude. This novel nonlinear error modeling approach has potential implications for EMG signal processing, particularly when assessing co-activation of antagonist muscles or small amplitude contractions where the SNR can be low. PMID:20869716
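The nonlinearity is easy to see for rms amplitudes: independent zero-mean signals add in power, so measured² ≈ signal² + noise², and linear subtraction of the noise amplitude is biased low at low SNR while a power-based (quadrature) correction is not. This toy demonstration uses Gaussian stand-ins for EMG and noise; the paper's actual correction model is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
noise_rms = 0.05                       # baseline noise, ~5% of max EMG

for true_rms in (1.0, 0.2, 0.05):      # max, low, very low contraction
    obs = rng.normal(0, true_rms, n) + rng.normal(0, noise_rms, n)
    measured = np.sqrt(np.mean(obs**2))
    linear = measured - noise_rms                        # naive subtraction
    quad = np.sqrt(max(measured**2 - noise_rms**2, 0.0)) # power-based fix
    print(f"true={true_rms:.3f}  linear={linear:.3f}  quadrature={quad:.3f}")
```

At true_rms = 0.05 (SNR of 1), linear subtraction recovers roughly 0.02 instead of 0.05, while the quadrature correction is nearly unbiased.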
Virtual tape measure for the operating microscope: system specifications and performance evaluation.
Kim, M Y; Drake, J M; Milgram, P
2000-01-01
The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.
Geodetic positioning using a global positioning system of satellites
NASA Technical Reports Server (NTRS)
Fell, P. J.
1980-01-01
Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler is examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.
Wong, Aaron L; Shelhamer, Mark
2014-05-01
Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.
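Long-range, power-law-decaying intertrial correlations of the kind described above are typically quantified with fractal time-series tools such as detrended fluctuation analysis (DFA). A generic, minimal DFA sketch on synthetic white noise (not the authors' saccade series; for white noise the exponent should come out near 0.5, with persistent correlations giving values above 0.5):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(s) per window size s.
    Minimal implementation with linear detrending per window."""
    y = np.cumsum(x - np.mean(x))            # integrated series
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t))**2)
               for seg in segs]              # residual variance per window
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(11)
x = rng.normal(size=4096)                    # white noise: expect alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"DFA exponent alpha = {alpha:.2f}")
```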
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
Continuous Glucose Monitoring in Newborn Infants
Thomas, Felicity; Signal, Mathew; Harris, Deborah L.; Weston, Philip J.; Harding, Jane E.; Shaw, Geoffrey M.
2014-01-01
Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data. PMID:24876618
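The Monte Carlo design described above can be sketched compactly: inject a meter-error model into the calibration BG samples, recalibrate the sensor trace, and recompute a hypoglycemia metric per run. Everything here (the glucose profile, the 2-point calibration, and the 5% CV multiplicative meter model) is an assumption standing in for the study's empirical, per-meter error models.

```python
import numpy as np

rng = np.random.default_rng(6)

def meter_error(bg, cv=0.05):
    """Hypothetical multiplicative meter-error model (assumed 5% CV)."""
    return bg * (1 + rng.normal(0, cv, np.shape(bg)))

def hypo_minutes(glucose, threshold=2.6, dt=5.0):
    """Duration below threshold (mmol/L) for a 5-min-sampled trace."""
    return float(np.sum(glucose < threshold) * dt)

# Toy 24 h profile and raw sensor signal, calibrated by a 2-point linear fit
t = np.arange(0, 24 * 60, 5.0)
true = 3.2 + 0.8 * np.sin(t / 240.0)
sensor = 120.0 * true + rng.normal(0, 5.0, t.size)
cal = [40, 200]                                  # calibration sample indices

s0, i0 = np.polyfit(sensor[cal], true[cal], 1)   # error-free calibration
baseline = hypo_minutes(s0 * sensor + i0)

runs = []
for _ in range(1000):                            # Monte Carlo over BG errors
    s, i = np.polyfit(sensor[cal], meter_error(true[cal]), 1)
    runs.append(hypo_minutes(s * sensor + i))
lo, hi = np.percentile(runs, [25, 75])
print(f"baseline {baseline:.0f} min; with meter error median "
      f"{np.median(runs):.0f} min (IQR {lo:.0f}-{hi:.0f})")
```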
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, S.K.; Dixon, T.H.; Freymueller, J.T.
1990-04-01
Geodetic monitoring of subduction of the Nazca and Cocos plates is a goal of the CASA (Central and South America) Global Positioning System (GPS) experiments, and requires measurement of intersite distances (baselines) in excess of 500 km. The major error source in these measurements is the uncertainty in the position of the GPS satellites at the time of observation. A key aspect of the first CASA experiment, CASA Uno, was the initiation of a global network of tracking stations to minimize these errors. The authors studied the effect of using various subsets of this global tracking network on long (>100 km) baseline estimates in the CASA region. Best results were obtained with a global tracking network consisting of three U.S. fiducial stations, two sites in the southwest Pacific, and two sites in Europe. Relative to smaller subsets, this global network improved baseline repeatability, resolution of carrier phase cycle ambiguities, and formal errors of the orbit estimates. Describing baseline repeatability for horizontal components as σ = (a² + b²L²)^1/2, where L is baseline length, the authors obtained a = 4 and 9 mm and b = 2.8 × 10⁻⁸ and 2.3 × 10⁻⁸ for the north and east components, respectively, on CASA baselines up to 1,000 km in length with this global network.
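The repeatability model above separates a length-independent floor a from a length-proportional term bL. Evaluating it with the paper's reported constants shows how the baseline-proportional term dominates at long range:

```python
import numpy as np

def repeatability(L_km, a_mm, b):
    """Baseline repeatability sigma = sqrt(a^2 + (b*L)^2), L in km."""
    L_mm = L_km * 1e6                 # km -> mm
    return np.sqrt(a_mm**2 + (b * L_mm)**2)

for L in (100, 500, 1000):            # baseline lengths [km]
    north = repeatability(L, a_mm=4.0, b=2.8e-8)
    east  = repeatability(L, a_mm=9.0, b=2.3e-8)
    print(f"L={L:5d} km  north={north:5.1f} mm  east={east:5.1f} mm")
```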
The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.
Goldmann, R E; Schwartz, M F; Wilshire, C E
2001-09-01
A corpus of phonological errors produced in narrative speech by a Wernicke's aphasic speaker (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not perseverations. R.W.B.'s anticipation/perseveration ratio was intermediate between that of a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates with severity. Finally, R.W.B.'s anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position. Copyright 2001 Academic Press.
Baseline estimation in flame's spectra by using neural networks and robust statistics
NASA Astrophysics Data System (ADS)
Garces, Hugo; Arias, Luis; Rojas, Alejandro
2014-09-01
This work presents a baseline estimation method for flame spectra based on an artificial neural network, combining robust statistics with multivariate analysis to automatically identify the measured wavelengths that belong to the continuous (baseline) feature used for model adaptation, thereby removing the need to measure the target baseline directly for training. The main contributions of this paper are: analyzing a flame spectra database by computing Jolliffe statistics from principal component analysis to detect wavelengths uncorrelated with most of the measured data, which correspond to the baseline; systematically determining the optimal number of neurons in the hidden layers based on Akaike's final prediction error; estimating the baseline over the full wavelength range of the sampled spectra; and training a neural network that generalizes the relation between measured and baseline spectra. The main application of this research is computing total radiation with baseline information, allowing the state of the combustion process to be diagnosed for optimization in early stages.
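A rough stand-in for the PCA-based wavelength screening: flag the channels whose variance is least explained by the leading principal components as baseline candidates. This is a simplified proxy for the Jolliffe-statistic criterion, not the authors' exact method, and the toy spectra (a varying continuum plus three emission peaks) are invented.

```python
import numpy as np

def baseline_channels(spectra, n_pc=3, frac=0.2):
    """Flag the channels least explained by the leading principal
    components (simplified proxy for Jolliffe-statistic screening)."""
    X = spectra - spectra.mean(axis=0)           # samples x wavelengths
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    total = (X**2).sum(axis=0)                   # per-channel variance
    captured = ((s[:n_pc, None] * Vt[:n_pc])**2).sum(axis=0)
    ratio = captured / np.maximum(total, 1e-12)
    return ratio < np.quantile(ratio, frac)      # True = baseline candidate

# Toy flame spectra: a varying smooth continuum plus three emission peaks
rng = np.random.default_rng(7)
wl = np.linspace(300, 800, 256)
continuum = np.exp(-((wl - 600) / 250.0)**2)
peaks = np.exp(-((wl[None, :] - np.array([430.0, 520.0, 590.0])[:, None])
                 / 3.0)**2)
spectra = (rng.uniform(0.8, 1.2, (50, 1)) * continuum
           + rng.uniform(0.5, 2.0, (50, 3)) @ peaks
           + 0.01 * rng.normal(size=(50, wl.size)))
mask = baseline_channels(spectra)
print(f"{mask.sum()} of {mask.size} channels flagged as baseline-like")
```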
Space shuttle navigation analysis. Volume 2: Baseline system navigation
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.
1980-01-01
Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.
Current Status of the Development of a Transportable and Compact VLBI System by NICT and GSI
NASA Technical Reports Server (NTRS)
Ishii, Atsutoshi; Ichikawa, Ryuichi; Takiguchi, Hiroshi; Takefuji, Kazuhiro; Ujihara, Hideki; Koyama, Yasuhiro; Kondo, Tetsuro; Kurihara, Shinobu; Miura, Yuji; Matsuzaka, Shigeru;
2010-01-01
MARBLE (Multiple Antenna Radio-interferometer for Baseline Length Evaluation) is under development by NICT and GSI. The main part of MARBLE is a transportable VLBI system with a compact antenna. The aim of this system is to provide precise baseline lengths of about 10 km for calibrating baselines. The calibration baselines are used to check and validate surveying instruments such as GPS receivers and EDMs (electro-optical distance meters). It is necessary to examine the calibration baselines regularly to keep the quality of the validation, and the VLBI technique can examine and evaluate them. On the other hand, the following role is expected of a compact VLBI antenna in the VLBI2010 project. To achieve the challenging measurement precision of VLBI2010, it is well known that the problem of thermal and gravitational deformation of the antenna must be addressed. One promising approach may be connected-element interferometry between a compact antenna and a VLBI2010 antenna. By repeatedly measuring the baseline between the small stable antenna and the VLBI2010 antenna, the deformation of the primary antenna can be measured, and thermal and gravitational models of the primary antenna can then be constructed. We made two prototypes of a transportable and compact VLBI system from 2007 to 2009, performed VLBI experiments using these prototypes, and obtained the baseline length between them. The formal error of the measured baseline length was 2.7 mm. We expect that the baseline length error will be reduced by using a high-speed A/D sampler.
Flexible, multi-measurement guided wave damage detection under varying temperatures
NASA Astrophysics Data System (ADS)
Douglass, Alexander C. S.; Harley, Joel B.
2018-04-01
Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are among the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data are then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage metric, or damage indicator, over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects from damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
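A minimal single-baseline version of the stretch-based compensation mentioned above: resample the measurement under each candidate time-stretch factor and keep the factor maximizing correlation with the baseline. The toy guided-wave records and the 0.2% stretch are assumptions for illustration; the paper's contribution is using every measurement as a baseline, which would wrap this search in a loop over baselines.

```python
import numpy as np

def best_stretch(baseline, measurement, candidates):
    """Return the time-stretch factor that best aligns the measurement
    with the baseline (brute-force correlation search)."""
    t = np.arange(baseline.size, dtype=float)
    def corr(a):
        resampled = np.interp(t, a * t, measurement)   # measurement(t/a)
        return np.corrcoef(baseline, resampled)[0, 1]
    return max(candidates, key=corr)

# Toy guided-wave records: "heating" compresses time by a factor of 1.002
t = np.arange(2000, dtype=float)
base = np.sin(0.05 * t) * np.exp(-((t - 700.0) / 300.0)**2)
meas = np.interp(1.002 * t, t, base)                   # heated record
cands = 1 + np.linspace(-0.005, 0.005, 101)
print("estimated stretch factor:", round(best_stretch(base, meas, cands), 4))
```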
Measuring continuous baseline covariate imbalances in clinical trial data
Ciolino, Jody D.; Martin, Renee’ H.; Zhao, Wenle; Hill, Michael D.; Jauch, Edward C.; Palesch, Yuko Y.
2014-01-01
This paper presents and compares several methods of measuring continuous baseline covariate imbalance in clinical trial data. Simulations illustrate that though the t-test is an inappropriate method of assessing continuous baseline covariate imbalance, the test statistic itself is a robust measure for capturing imbalance in continuous covariate distributions. Guidelines for assessing the effects of imbalance on bias, type I error rate, and power of the hypothesis test for treatment effect on continuous outcomes are presented, and the benefit of covariate-adjusted analysis (ANCOVA) is also illustrated. PMID:21865270
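The distinction drawn above is between the t statistic as a descriptive measure of imbalance and the t-test as an (inappropriate) hypothesis test. A minimal sketch of the descriptive use, on invented arm data:

```python
import numpy as np
from scipy import stats

def imbalance_t(x_treat, x_ctrl):
    """Two-sample t statistic used descriptively: how far apart the baseline
    covariate distributions sit, not a hypothesis test of randomization."""
    return stats.ttest_ind(x_treat, x_ctrl, equal_var=False).statistic

rng = np.random.default_rng(8)
age_t = rng.normal(62, 10, 150)     # toy baseline covariate, treated arm
age_c = rng.normal(64, 10, 150)     # control arm, slightly older
print(f"imbalance t = {imbalance_t(age_t, age_c):.2f}")
```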
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
Minimal entropy reconstructions of thermal images for emissivity correction
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.
1999-03-01
Low emissivity with correspondingly low thermal emission is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, thus reducing the net signal-to-noise ratio and degrading the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing; errors in the baseline images are therefore additive, causing the resulting measurement error to double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which not only corrects for emissivity variations but also corrects for variations in sensor response, using the baseline images at known temperatures. The minimal entropy reconstruction is based on a modified Hopfield neural network which finds the image that best explains the observed data and baseline data while having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.
Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error
Zhang, Yan; Shen, Jun
2013-01-01
Purpose A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods The nonparametric baseline function with a given smoothness was treated as a function of spectral parameters, and its uncertainty was measured by root-mean-squared error (RMSE). The proposed method was demonstrated with a simulated spectrum and in vivo spectra at both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with the metabolite-nulled spectra and the LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results An optimal smoothness condition was found that led to the minimal baseline RMSE. In this condition, the best fit was balanced against minimal baseline influences on metabolite concentration estimates. Conclusion Baseline RMSE can be used to indicate estimated baseline uncertainties and serve as the criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436
Elliott, Amanda F.; McGwin, Gerald; Owsley, Cynthia
2009-01-01
OBJECTIVE To evaluate the effect of vision-enhancing interventions (i.e., cataract surgery or refractive error correction) on physical function and cognitive status in nursing home residents. DESIGN Longitudinal cohort study. SETTING Seventeen nursing homes in Birmingham, AL. PARTICIPANTS A total of 187 English-speaking older adults (>55 years of age). INTERVENTION Participants took part in one of two vision-enhancing interventions: cataract surgery or refractive error correction. Each group was compared against a control group (persons eligible for but who declined cataract surgery, or who received delayed correction of refractive error). MEASUREMENTS Physical function (i.e., ability to perform activities of daily living and mobility) was assessed with a series of self-report and certified nursing assistant ratings at baseline and at 2 months for the refractive error correction group, and at 4 months for the cataract surgery group. The Mini Mental State Exam was also administered. RESULTS No significant differences existed within or between groups from baseline to follow-up on any of the measures of physical function. Mental status scores significantly declined from baseline to follow-up for both the immediate (p= 0.05) and delayed (p< 0.02) refractive error correction groups and for the cataract surgery control group (p= 0.05). CONCLUSION Vision-enhancing interventions did not lead to short-term improvements in physical functioning or cognitive status in this sample of elderly nursing home residents. PMID:19170783
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables, and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco
2015-11-01
In vivo micro-computed tomography (µCT) scanning is an important tool for longitudinal monitoring of the bone adaptation process in animal models. However, the errors associated with the usage of in vivo µCT measurements for the evaluation of bone adaptations remain unclear. The aim of this study was to evaluate the measurement errors using the bone surface distance approach. The right tibiae of eight 14-week-old C57BL/6J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size (10.4 µm) and the tibiae were repositioned between each scan. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration and a region of interest was selected in the proximal tibia metaphysis for analysis. The bone surface distances between the repeated and the baseline scan datasets were evaluated. It was found that the average (±standard deviation) median and 95th percentile bone surface distances were 3.10 ± 0.76 µm and 9.58 ± 1.70 µm, respectively. This study indicated that there were inevitable errors associated with the in vivo µCT measurements of bone microarchitecture and these errors should be taken into account for a better interpretation of bone adaptations measured with in vivo µCT. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
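The practical point of a multiplicative error model is that the noise variance scales with the (true) measurement value, so ordinary least squares is misweighted. A toy simulation under assumed parameters, contrasting OLS with one simple reweighting scheme (1/y² weights; the paper develops several rigorous LS adjustments, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500
x_true = rng.uniform(10.0, 50.0, n)           # true ranges/heights
A = np.column_stack([np.ones(n), x_true])     # toy linear model y = a + b*x
beta_true = np.array([2.0, 0.8])
y_true = A @ beta_true
# multiplicative errors: noise proportional to the true value (LiDAR-like)
y = y_true * (1 + rng.normal(0, 0.03, n))

beta_ols = np.linalg.lstsq(A, y, rcond=None)[0]        # ignores scaling
W = 1.0 / y**2                                         # approximate weights
beta_wls = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * y))
print("OLS:", beta_ols.round(4), " WLS:", beta_wls.round(4))
```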
Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources
NASA Technical Reports Server (NTRS)
Olson, Corwin; Long, Anne; Carpenter, J. Russell
2011-01-01
The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO), with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.
Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi
2018-05-01
The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for each of the nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) were evaluated between the baseline and error-induced measurements. Some deviations in MLC error sensitivity across the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices had a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as in lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error in some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
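For reference, the gamma analysis used above combines a dose tolerance and a distance-to-agreement tolerance into one pass/fail index per point: Γ(i) = min over j of sqrt((Δx/Δd_tol)² + (ΔD/D_tol)²), passing where Γ ≤ 1. A brute-force 1-D textbook version on an invented profile (a 2% dose error plus a 1 mm shift):

```python
import numpy as np

def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma index: dose_tol as a fraction of the reference
    maximum, dist_tol in the units of x. Brute-force implementation."""
    d_norm = dose_tol * ref.max()
    gam = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        cap = ((x - xi) / dist_tol)**2 + ((meas - di) / d_norm)**2
        gam[i] = np.sqrt(cap.min())
    return gam

x = np.linspace(-50, 50, 501)                  # position [mm]
ref = np.exp(-(x / 20.0)**2)                   # baseline measurement
meas = 1.02 * np.exp(-((x - 1.0) / 20.0)**2)   # error-induced measurement
g = gamma_1d(ref, meas, x)
print(f"gamma pass rate (3%/3 mm): {np.mean(g <= 1) * 100:.1f}%")
```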
Dehydration and Performance on Clinical Concussion Measures in Collegiate Wrestlers
Weber, Amanda Friedline; Mihalik, Jason P.; Register-Mihalik, Johna K.; Mays, Sally; Prentice, William E.; Guskiewicz, Kevin M.
2013-01-01
Context: The effects of dehydration induced by wrestling-related weight-cutting tactics on clinical concussion outcomes, such as neurocognitive function, balance performance, and symptoms, have not been adequately studied. Objective: To evaluate the effects of dehydration on the outcome of clinical concussion measures in National Collegiate Athletic Association Division I collegiate wrestlers. Design: Repeated-measures design. Setting: Clinical research laboratory. Patients or Other Participants: Thirty-two Division I healthy collegiate male wrestlers (age = 20.0 ± 1.4 years; height = 175.0 ± 7.5 cm; baseline mass = 79.2 ± 12.6 kg). Intervention(s): Participants completed preseason concussion baseline testing in early September. Weight and urine samples were also collected at this time. All participants reported to prewrestling practice and postwrestling practice for the same test battery and protocol in mid-October. They had begun practicing weight-cutting tactics a day before prepractice and postpractice testing. Differences between these measures permitted us to evaluate how dehydration and weight-cutting tactics affected concussion measures. Main Outcome Measures: Sport Concussion Assessment Tool 2 (SCAT2), Balance Error Scoring System (BESS), Graded Symptom Checklist (GSC), and Simple Reaction Time scores. The Simple Reaction Time was measured using the Automated Neuropsychological Assessment Metrics. Results: The SCAT2 measurements were lower at prepractice (P = .002) and postpractice (P < .001) when compared with baseline. The BESS error scores were higher at postpractice when compared with baseline (P = .015). The GSC severity scores were higher at prepractice (P = .011) and postpractice (P < .001) than at baseline, and higher at postpractice than at prepractice (P = .003). The number of GSC symptoms reported was also higher at prepractice (P = .036) and postpractice (P < .001) when compared with baseline, and at postpractice when compared with prepractice (P = .003). Conclusions: Our results suggest that it is important for wrestlers to be evaluated in a euhydrated state to ensure that dehydration is not influencing the outcome of the clinical measures. PMID:23672379
Subnanosecond GPS-based clock synchronization and precision deep-space tracking
NASA Technical Reports Server (NTRS)
Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.
1992-01-01
Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
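The 0.3 ns to 11 nrad figure quoted above follows directly from the geometry: a delay error dt across a baseline of length B corresponds to an angular error of roughly c·dt/B.

```python
c = 299_792_458.0      # speed of light [m/s]
dt = 0.3e-9            # clock synchronization error [s]
B = 8.0e6              # DSN baseline length [m]
print(f"angular error ~ {c * dt / B * 1e9:.1f} nrad")   # ~11 nrad
```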
NASA Technical Reports Server (NTRS)
Kuan, Gary M.; Dekens, Frank G.
2006-01-01
The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.
The Spring 1985 high precision baseline test of the JPL GPS-based geodetic system
NASA Technical Reports Server (NTRS)
Davidson, John M.; Thornton, Catherine L.; Stephens, Scott A.; Blewitt, Geoffrey; Lichten, Stephen M.; Sovers, Ojars J.; Kroger, Peter M.; Skrumeda, Lisa L.; Border, James S.; Neilan, Ruth E.
1987-01-01
The Spring 1985 High Precision Baseline Test (HPBT) was designed to meet a number of objectives. Foremost among these was the demonstration of a level of accuracy of 1 to 2 parts in 10⁷, or better, for baselines ranging in length up to several hundred kilometers. These objectives were all met with a high degree of success, particularly with respect to the demonstration of system accuracy. The results from six baselines ranging in length from 70 to 729 km were examined for repeatability and, in the case of three baselines, were compared to results from colocated VLBI systems. Repeatability was found to be 5 parts in 10⁸ (RMS) for the north baseline coordinate, independent of baseline length, while for the east coordinate the RMS repeatability was larger by factors of 2 to 4. The GPS-based results were found to agree with those from colocated VLBI measurements, when corrected for the physical separations of the VLBI and GPS antennas, at the level of 1 to 2 parts in 10⁷ in all coordinates, independent of baseline length. The results for baseline repeatability are consistent with the current GPS error budget, but the GPS-VLBI intercomparisons disagree at a somewhat larger level than expected. It is hypothesized that these differences may result from errors in the local survey measurements used to correct for the separations of the GPS and VLBI antenna reference centers.
Strauss, Rupert W; Muñoz, Beatriz; Wolfson, Yulia; Sophie, Raafay; Fletcher, Emily; Bittencourt, Millena G; Scholl, Hendrik P N
2016-01-01
Aims To estimate disease progression based on analysis of macular volume measured by spectral-domain optical coherence tomography (SD-OCT) in patients affected by Stargardt macular dystrophy (STGD1) and to evaluate the influence of software errors on these measurements. Methods 58 eyes of 29 STGD1 patients were included. Numbers and types of algorithm errors were recorded and manually corrected. In a subgroup of 36 eyes of 18 patients with at least two examinations over time, total macular volume (TMV) and volumes of all nine Early Treatment of Diabetic Retinopathy Study (ETDRS) subfields were obtained. Random effects models were used to estimate the rate of change per year for the population, and empirical Bayes slopes were used to estimate yearly decline in TMV for individual eyes. Results 6958 single B-scans from 190 macular cube scans were analysed. 2360 (33.9%) showed algorithm errors. Mean observation period for follow-up data was 15 months (range 3–40). The median (IQR) change in TMV using the empirical Bayes estimates for the individual eyes was −0.103 (−0.145, −0.059) mm³ per year. The mean (±SD) TMV was 6.321±1.000 mm³ at baseline, and the rate of decline was −0.118 mm³ per year (p=0.003). Yearly mean volume change was −0.004 mm³ in the central subfield (mean baseline=0.128 mm³), −0.032 mm³ in the inner (mean baseline=1.484 mm³) and −0.079 mm³ in the outer ETDRS subfields (mean baseline=5.206 mm³). Conclusions SD-OCT measurements allow monitoring the decline in retinal volume in STGD1; however, they require significant manual correction of software errors. PMID:26568636
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims to correct orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as linear functions of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
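The per-interferogram estimation step reduces to an ordinary least-squares fit of phase ramps. A minimal sketch in Python, assuming (hypothetically) that the unwrapped residual phase is available as scattered samples and that, to first order, a perpendicular-baseline error maps to a range-dependent ramp and a parallel-baseline rate to an azimuth-dependent ramp; all numbers are illustrative:

```python
import numpy as np

# Hypothetical unwrapped residual phase samples (radians) at image
# coordinates: slant range (m) and azimuth time (s).
rng_coord = np.random.uniform(8.0e5, 8.8e5, 500)
az_time = np.random.uniform(0.0, 16.0, 500)
true_params = np.array([2.0e-5, 0.15, 0.4])     # range ramp, azimuth ramp, offset
phase = true_params[0] * rng_coord + true_params[1] * az_time + true_params[2]
phase += np.random.normal(0.0, 0.3, rng_coord.size)  # "atmosphere" as noise

# Design matrix: range-dependent ramp (perpendicular-baseline error),
# azimuth-dependent ramp (parallel-baseline rate), constant offset.
A = np.column_stack([rng_coord, az_time, np.ones_like(rng_coord)])
est, *_ = np.linalg.lstsq(A, phase, rcond=None)
print("estimated ramp parameters:", est)
```

In the paper's network adjustment these per-interferogram estimates would then be combined across an overdetermined set of interferograms to yield one correction per acquisition.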
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily-to-weekly measurement frequency recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the error statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and timescales of less than a day.
NASA Astrophysics Data System (ADS)
Carter, W. E.; Robertson, D. S.; Nothnagel, A.; Nicolson, G. D.; Schuh, H.
1988-12-01
High-accuracy geodetic very long baseline interferometry measurements between the African, Eurasian, and North American plates have been analyzed to determine the terrestrial coordinates of the Hartebeesthoek observatory to better than 10 cm, to determine the celestial coordinates of eight Southern Hemisphere radio sources with milliarcsecond (mas) accuracy, and to derive quasi-independent polar motion, UT1, and nutation time series. Comparison of the earth orientation time series with ongoing International Radio Interferometric Surveying project values shows agreement at about the 1 mas level in polar motion and nutation and 0.1 ms of time in UT1. Given the independence of the observing sessions and the unlikeliness of common systematic error sources, this level of agreement serves to bound the total errors in both measurement series.
de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry
2016-01-01
In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342
Constraints on Pacific plate kinematics and dynamics with global positioning system measurements
NASA Technical Reports Server (NTRS)
Dixon, T. H.; Golombek, M. P.; Thornton, C. L.
1985-01-01
A measurement program designed to investigate kinematic and dynamic aspects of plate tectonics in the Pacific region by means of satellite observations is proposed. Accuracy studies are summarized showing that for short baselines (less than 100 km), the measuring accuracy of global positioning system (GPS) receivers can be in the centimeter range. For longer baselines, uncertainty in the orbital ephemerides of the GPS satellites could be a major source of error. Simultaneous observations at widely (about 300 km) separated fiducial stations over the Pacific region should permit an accuracy in the centimeter range for baselines of up to several thousand kilometers. This optimum performance level is based on the assumption that fiducial baselines are known a priori to the centimeter range. An example fiducial network for a GPS study of the South Pacific region is described.
Very long baseline IPS observations of the solar wind speed in the fast polar streams
NASA Technical Reports Server (NTRS)
Rao, A. Pramesh; Ananthakrishnan, S.; Balasubramanian, V.; Coles, William A.
1995-01-01
Observations of intensity scintillation (IPS) with two or more spaced antennas have been widely used to measure the solar wind velocity. Such methods are particularly valuable in regions which spacecraft have not yet penetrated, but they are also very useful in improving the spatial and temporal sampling of the solar wind, even in regions where spacecraft data are available. The principle of the measurement is to estimate the time delay τ_d between the scintillations observed over an antenna baseline b; the velocity estimate is just V = b/τ_d. The error Δτ_d in estimating the time delay is independent of the baseline length, so the fractional velocity error ΔV/V ≈ Δτ_d/τ_d is inversely proportional to τ_d and hence to b. However, the use of a long baseline b has a less obvious advantage: it provides a means for separating fast and slow contributions when both are present in the scattering region. Here we present recent observations made using the large cylinder antenna at Ooty in the Nilgiri Hills of South India and one of the 45 m dishes of GMRT near Pune in West-Central India. The baseline of 900 km is, by a considerable margin, the longest ever used for IPS and provides excellent velocity resolution. These results, compared with ULYSSES observations and other IPS measurements made closer to the sun with higher-frequency instruments such as EISCAT and the VLBA, will provide a precise measure of the velocity profile of the fast north-polar stream.
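The quoted error relation is simple enough to check numerically. A minimal sketch, with all numbers assumed for illustration (the 900 km baseline from the abstract, a fast polar stream near 750 km/s, and a delay-estimation error that does not depend on baseline length):

```python
b = 900e3            # baseline length (m)
v = 750e3            # assumed fast-stream speed (m/s)
tau_d = b / v        # expected delay between antennas: 1.2 s
dtau = 0.05          # assumed delay-estimation error (s), baseline-independent

# Fractional velocity error shrinks as the baseline (and hence tau_d) grows.
dv_over_v = dtau / tau_d
print(f"tau_d = {tau_d:.2f} s, fractional speed error = {dv_over_v:.1%}")
```

Doubling b doubles τ_d and halves the fractional error, which is the quantitative case for the 900 km Ooty-GMRT baseline.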
Mullan, F; Bartlett, D; Austin, R S
2017-06-01
To investigate the measurement performance of a chromatic confocal profilometer for quantification of surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural and unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y, z and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the combined standard uncertainty (u_c), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of −0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols the profilometer successfully characterized surface texture changes in enamel from erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
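The reported u_c is consistent with a standard Type B budget in which each maximum error is treated as the half-width of a rectangular distribution (u = a/√3) and the components are combined in quadrature. A sketch of that arithmetic; only the 0.49 μm flatness term comes from the abstract, and the minor component values are assumed placeholders:

```python
import math

# Maximum errors a_i (um); rectangular distribution -> u_i = a_i / sqrt(3).
components_um = {"stage flatness (Sz)": 0.49,    # from the abstract
                 "measurement noise": 0.02,      # assumed
                 "x,y,z non-linearity": 0.03,    # assumed
                 "sample instability": 0.02}     # assumed
u = {name: a / math.sqrt(3) for name, a in components_um.items()}

# Combined standard uncertainty: root sum of squares of the components.
u_c = math.sqrt(sum(v * v for v in u.values()))
print(f"u_c ~= {u_c:.2f} um")   # ~0.28 um, dominated by the flatness term
```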
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, as used in baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
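A discrete first-order Gauss-Markov process is straightforward to simulate, and a sum of several such processes with spread time constants can approximate a broad power spectral density. A sketch under assumed parameters (the five time constants and amplitudes below are illustrative, not the paper's values):

```python
import numpy as np

def gauss_markov(n, dt, tau, sigma, rng):
    """Discrete first-order Gauss-Markov process with time constant tau."""
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)        # driving-noise variance
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    return x

rng = np.random.default_rng(0)
dt, n = 1.0, 10000
# Five processes with spread time constants; amplitudes would be tuned so the
# summed PSD matches the clock PSD implied by the Allan variance model.
taus = [1e0, 1e1, 1e2, 1e3, 1e4]
sigmas = [1.0, 0.7, 0.5, 0.35, 0.25]     # assumed
clock_noise = sum(gauss_markov(n, dt, t, s, rng) for t, s in zip(taus, sigmas))
print("simulated clock-noise RMS:", clock_noise.std())
```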
Reproducibility of the six-minute walking test in chronic heart failure patients.
Pinna, G D; Opasich, C; Mazza, A; Tangenti, A; Maestri, R; Sanarico, M
2000-11-30
The six-minute walking test (WT) is used in trials and clinical practice as an easy tool to evaluate the functional capacity of chronic heart failure (CHF) patients. As WT measurements are highly variable both between and within individuals, this study aims at assessing the contribution of the different sources of variation and estimating the reproducibility of the test. A statistical model describing WT measurements as a function of fixed and random effects is proposed and its parameters estimated. We considered 202 stable CHF patients who performed two baseline WTs separated by a 30 minute rest; 49 of them repeated the two tests 3 months later (follow-up control). They had no changes in therapy or major clinical events. Another 31 subjects performed two baseline tests separated by 24 hours. Collected data were analysed using a mixed model methodology. There was no significant difference between measurements taken 30 minutes and 24 hours apart (p = 0.99). A trend effect of 17 (1.4) m (mean (SE)) was consistently found between duplicate tests (p < 0.001). REML estimates of variance components were: 5189 (674) for subject differences in the error-free value; 1280 (304) for subject differences in spontaneous clinical evolution between baseline and follow-up control; and 266 (23) for the within-subject error. Hence, the standard error of measurement was 16.3 m, namely 4 per cent of the average WT performance (403 m) in this sample. The intraclass correlation coefficient was 0.96. We conclude that WT measurements are characterized by good intrasubject reproducibility and excellent reliability. When follow-up studies of ≥3 months are performed, unpredictable changes in individual walking performance due to spontaneous clinical evolution are to be expected. Their clinical significance, however, is not known. Copyright 2000 John Wiley & Sons, Ltd.
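From the reported variance components, the standard error of measurement and the intraclass correlation can be reproduced directly. A sketch of the arithmetic; the ICC line reflects one plausible reading (all stable between-subject variance counted as true-score variance), and the paper's exact definition may differ:

```python
import math

# Variance components reported in the abstract (m^2).
var_subject  = 5189.0   # between-subject differences in the error-free value
var_clinical = 1280.0   # spontaneous clinical evolution between visits
var_error    = 266.0    # within-subject measurement error

sem = math.sqrt(var_error)                   # standard error of measurement
print(f"SEM = {sem:.1f} m ({sem / 403 * 100:.0f}% of the 403 m mean)")

# One plausible ICC reading: true-score variance over total variance.
icc = (var_subject + var_clinical) / (var_subject + var_clinical + var_error)
print(f"ICC = {icc:.2f}")                    # ~0.96, matching the abstract
```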
Hernan, Andrea; Philpot, Benjamin; Janus, Edward D; Dunbar, James A
2012-07-08
Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004-05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, the accuracy of self-reported data (height, weight, waist circumference (WC) and body mass index (BMI)) from FINDRISC forms was compared with baseline measurements, and the impact on participation eligibility was assessed. Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of the n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = -0.226, p-value < 0.001). However, underestimation resulted in only 6% of individuals at high risk of diabetes being incorrectly categorised as moderate or low risk. Overall, FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
2014-01-01
Background Monitoring of intracranial pressure (ICP) is a cornerstone in the surveillance of neurosurgical patients. The ICP is measured against a baseline pressure (i.e. zero or reference pressure). We have previously reported that baseline pressure errors (BPEs), manifested as spontaneous shifts or drifts in baseline pressure, cause erroneous readings of mean ICP in individual patients. The objective of this study was to monitor the frequency and severity of BPEs. To this end, we performed a prospective, observational study monitoring the ICP from two separate ICP sensors (Sensors 1 and 2) placed in close proximity in the brain. We characterized BPEs as differences in mean ICP despite near-identical ICP waveforms in Sensors 1 and 2. Methods The study enrolled patients with aneurysmal subarachnoid hemorrhage in need of continuous ICP monitoring as part of their intensive care management. The two sensors were placed close to each other in the brain parenchyma via the same burr hole. The monitoring was performed as long as needed from a clinical perspective and the ICP recordings were stored digitally for analysis. For every patient the mean ICP as well as the various ICP wave parameters of the two sensors were compared. Results Sixteen patients were monitored for a median of 164 hours (range 70-364 hours). Major BPEs, as defined by marked differences in mean ICP despite similar ICP waveforms, were seen in 9 of them (56%). The BPEs were of magnitudes that had the potential to alter patient management. Conclusions Baseline pressure errors (BPEs) occur in a significant number of patients undergoing continuous ICP monitoring and may alter patient management. The current practice of measuring ICP against a baseline pressure falls short of the state of the art; monitoring of the ICP waves, which are not influenced by BPEs, ought to become the new standard. PMID:24472296
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan
2014-01-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the 'true' mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
NASA Technical Reports Server (NTRS)
Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.
1985-01-01
Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm at all elevation angles down to 5 deg, and introduces errors into the estimates of baseline length of less than about 1 cm for the multistation intercontinental experiment analyzed here.
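The role of a mapping function is to scale a zenith delay to an arbitrary elevation angle. A sketch of the crudest plane-parallel form, 1/sin(elevation), with an assumed zenith delay; its poor low-elevation behaviour is precisely why refined, ray-trace-calibrated mapping functions like the one developed here are needed (this sketch is not the paper's function):

```python
import numpy as np

def simple_mapping(elev_deg):
    """Plane-parallel approximation: delay scales as 1/sin(elevation)."""
    return 1.0 / np.sin(np.radians(elev_deg))

zenith_delay_m = 2.3   # assumed typical total zenith delay (m)
for e in (90, 30, 10, 5):
    print(f"{e:2d} deg: {zenith_delay_m * simple_mapping(e):.2f} m")
```

At 5 deg the crude factor exceeds 11, so even millimetre-level errors in the mapping function translate into centimetre-level path-delay errors, consistent with the 5 cm baseline-length errors quoted above.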
NASA Technical Reports Server (NTRS)
Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.
1988-01-01
Surface meteorological (SM) and water vapor radiometer (WVR) measurements are used as an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California, where high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration with stochastic estimation of residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.
A novel variable baseline visibility detection system and its measurement method
NASA Astrophysics Data System (ADS)
Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan
2017-10-01
As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to optical system contamination as well as sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this problem, a novel measurement system was designed based upon multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver, applying a weighted least-squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
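A variable-baseline transmission meter reduces, in the simplest reading, to fitting Beer-Lambert extinction over several path lengths. A hedged sketch with simulated transmittances, an assumed noise model for the weights, and the Koschmieder relation (5% contrast) for visibility; this illustrates the weighted least-squares idea, not the authors' exact processing:

```python
import numpy as np

# Hypothetical transmittance measurements at several receiver baselines (m),
# simulated for an extinction coefficient of ~0.01 /m.
L = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
T = np.array([0.905, 0.818, 0.742, 0.670, 0.608])
w = 1.0 / (0.002 + 0.01 * L)        # assumed noise model -> down-weight long paths

# Beer-Lambert: ln T = -sigma * L. Weighted least squares through the origin.
y = np.log(T)
sigma = -np.sum(w * L * y) / np.sum(w * L * L)

mor = 2.996 / sigma                  # Koschmieder relation, 5% contrast threshold
print(f"extinction = {sigma:.4f} /m, visibility ~= {mor:.0f} m")
```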
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometrical distance from the dual frequency carrier phase measurements, after which the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the carrier phase measurements on different frequencies contain the same geometrical distance. Then, we propose the DDGF detection to detect large carrier phase error differences between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurements by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
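The geometry-free idea can be illustrated in a few lines: two double-differenced carrier phases (expressed in metres) share the same geometric term, so their difference leaves only ambiguity biases, small residuals, and any large per-frequency error. A sketch on synthetic data; all values are assumed, and real processing would additionally handle the differential ionosphere and ambiguity terms explicitly:

```python
import numpy as np

# Hypothetical double-differenced carrier phases (m) on two frequencies.
# Both contain the same geometric term rho_dd; differencing removes it.
n = 200
rho_dd = np.cumsum(np.random.normal(0, 0.02, n))          # slowly varying geometry
phi_l1 = rho_dd + 0.05 + np.random.normal(0, 0.003, n)    # bias ~ ambiguity term
phi_l2 = rho_dd - 0.08 + np.random.normal(0, 0.003, n)
phi_l1[120] += 0.19                                        # injected large L1 error

ddgf = phi_l1 - phi_l2                 # geometry-free combination
resid = ddgf - np.median(ddgf)
# Robust threshold: 4 sigma, with sigma from the median absolute deviation.
flags = np.abs(resid) > 4 * 1.4826 * np.median(np.abs(resid))
print("flagged epochs:", np.where(flags)[0])               # epoch 120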
Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto
2006-01-01
We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally with the orbital dynamics, the height accuracy of the system is modeled as a function of spacecraft position and ground location. Vector sensitivity equations for height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 accuracy can be achieved.
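The core phase-to-height sensitivity can be evaluated with textbook formulas. A sketch using the mission parameters quoted above; the look angle, phase noise, and baseline error values are assumed for illustration, and the one-way (bistatic) phase convention is used:

```python
import math

lam = 0.031        # X-band wavelength (m)
H = 514e3          # orbit altitude (m)
theta = math.radians(35.0)      # assumed look angle
R = H / math.cos(theta)         # slant range, flat-Earth approximation
B_perp = 300.0     # perpendicular baseline (m)

# Height per radian of interferometric phase (bistatic, one-way factor).
k = lam * R * math.sin(theta) / (2.0 * math.pi * B_perp)
sigma_phi = 0.1                 # assumed phase noise (rad)
print(f"height error from phase noise: {k * sigma_phi:.2f} m")

# A relative baseline-length error scales topographic height proportionally.
h_topo, dB = 1000.0, 0.01       # 1 km of relief, 1 cm baseline error (assumed)
print(f"height error from baseline:   {h_topo * dB / B_perp:.3f} m")
```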
NASA Technical Reports Server (NTRS)
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high-contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per-pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to select the higher-quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
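The registration measurement itself is a translation search maximizing normalized cross-correlation. A minimal integer-pixel sketch on synthetic images; real processing works on upsampled/downsampled localized scenes and refines to sub-pixel shifts:

```python
import numpy as np

def best_shift(ref, img, max_shift=5):
    """Integer-pixel shift of `img` maximizing normalized correlation vs `ref`."""
    best, best_cc = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            a = shifted - shifted.mean()
            b = ref - ref.mean()
            cc = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
            if cc > best_cc:
                best_cc, best = cc, (dy, dx)
    return best, best_cc

rng = np.random.default_rng(1)
truth = rng.normal(size=(64, 64))
landsat = truth                                    # stands in for ground truth
abi = np.roll(np.roll(truth, 2, axis=0), -1, axis=1) + rng.normal(0, 0.1, (64, 64))
print(best_shift(landsat, abi))   # (-2, 1): the shift aligning abi to the reference
```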
Unit of Measurement Used and Parent Medication Dosing Errors
Yin, H. Shonna; Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.
2014-01-01
BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742
Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test
NASA Astrophysics Data System (ADS)
Christophides, Damianos; Davies, Alex; Fleckney, Mark
2016-12-01
Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences, such as intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ≥0.5 mm using electronic portal imaging device-based picket fence tests and to compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a period of 14-20 months, depending on the linac. An algorithm was developed that calculated the MLC error for each leaf pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median error and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning of MLC errors that resulted in clinical downtime.
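The detection logic can be sketched as: characterize baseline per-leaf-pair error distributions, then flag images whose summary metrics fall outside baseline behaviour. A toy version with simulated errors; the leaf count, noise level, and thresholds are assumed, not the paper's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 0.08, (26, 60))   # 26 baseline weeks x 60 leaf pairs (mm)
today = rng.normal(0.0, 0.08, 60)
today[17] += 0.5                             # injected 0.5 mm relative error

# Metrics echoing the abstract: per-image median error and a one-sample KS
# p-value of the standardized errors, plus a simple per-leaf 4-sigma limit
# (which is what catches the single injected fault in this toy example).
z = (today - baseline.mean()) / baseline.std()
_, ks_p = stats.kstest(z, "norm")
leaf_flags = np.abs(today - baseline.mean(axis=0)) > 4 * baseline.std(axis=0)
print(f"median={np.median(today):+.3f} mm, KS p={ks_p:.3g}, "
      f"flagged leaf pairs: {np.where(leaf_flags)[0]}")
```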
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.
2012-01-01
The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models vs. the standard uncorrelated models. The X/Ka astrometric scatter improved by approx. 10% and the systematic Δδ-versus-δ slope decreased by approx. 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.
NASA Technical Reports Server (NTRS)
de Pater, I.
1977-01-01
Observations were made of Jupiter with the Westerbork telescope at the three available frequencies: 610 MHz, 1415 MHz, and 4995 MHz. The raw measurements were corrected for position errors, atmospheric extinction, Faraday rotation, clock, frequency, and baseline errors, and errors due to a shadowing effect. The data were then converted into a brightness distribution of the sky by Fourier transformation. Maps of both thermal and nonthermal radiation were developed. Results indicate that the thermal disk of Jupiter measured at a wavelength of 6 cm has a temperature of 236 ± 15 K. The radiation belts have an overall structure governed by the trapping of electrons in the dipolar field of the planet, with significant beaming of the synchrotron radiation into the plane of the magnetic equator.
Lightning Radio Source Retrieval Using Advanced Lightning Direction Finder (ALDF) Networks
NASA Technical Reports Server (NTRS)
Koshak, William J.; Blakeslee, Richard J.; Bailey, J. C.
1998-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing and arrival time of lightning radio emissions. Solutions for the plane (i.e., no Earth curvature) are provided that implement all of the measurements mentioned above. Tests of the retrieval method are provided using computer-simulated data sets. We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. In the absence of measurement errors, quadratic root degeneracy (no source location ambiguity) is shown to exist exactly on the outer sensor baselines for arbitrary non-collinear network geometries. The accuracy of the quadratic planar method is tested with computer-generated data sets. The results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg. We also note some of the advantages and disadvantages of these methods over the nonlinear method of χ² minimization employed by the National Lightning Detection Network (NLDN) and discussed in Cummins et al. (1993, 1995, 1998).
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
2000-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
NASA Technical Reports Server (NTRS)
Ma, C.
1978-01-01
The causes and effects of diurnal polar motion are described. An algorithm is developed for modeling the effects on very long baseline interferometry observables. Five years of radio-frequency very long baseline interferometry data from stations in Massachusetts, California, and Sweden are analyzed for diurnal polar motion. It is found that the effect is larger than predicted by McClure. Corrections to the standard nutation series caused by the deformability of the earth have a significant effect on the estimated diurnal polar motion scaling factor and the post-fit residual scatter. Simulations of high precision very long baseline interferometry experiments taking into account both measurement uncertainty and modeled errors are described.
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the measurement results, it is necessary to use procedures restricting the influence of instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the accuracy of the derived values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were carried out with the idea of suppressing the random error by averaging repeated measurements, and reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the error correction of the distance meter can now be determined in two ways: by interpolation on the raw calibration data, or by using a correction function derived via FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm: for the Topcon GPT-7501 (nominal standard deviation 2 mm), from 2.8 mm without corrections to 0.55 mm; for the Trimble M3 (nominal 3 mm), from 1.1 mm to 0.58 mm; and for the Trimble S6 (nominal 1 mm), from 1.2 mm to 0.51 mm. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
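The first of the two correction options, interpolation on the raw calibration data, is a one-liner once a calibration table exists. A sketch with hypothetical values; a real table would sample the full 38.6 m baseline densely:

```python
import numpy as np

# Hypothetical calibration table: distances measured by the instrument on the
# laboratory baseline vs. reference values from the absolute tracker (m).
measured = np.array([4.8013, 9.6021, 14.4008, 19.2030, 24.0018, 28.8025])
reference = np.array([4.8005, 9.6010, 14.4012, 19.2011, 24.0013, 28.8008])
corrections = reference - measured       # sampled instrument error curve

def corrected(d):
    """Apply a correction interpolated from the calibration residuals."""
    return d + np.interp(d, measured, corrections)

print(corrected(12.3456))
```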
Status and Prospects for Combined GPS LOD and VLBI UT1 Measurements
NASA Astrophysics Data System (ADS)
Senior, K.; Kouba, J.; Ray, J.
2010-01-01
A Kalman filter was developed to combine VLBI estimates of UT1-TAI with biased length of day (LOD) estimates from GPS. The VLBI results are the analyses of the NASA Goddard Space Flight Center group from 24-hr multi-station observing sessions several times per week and the nearly daily 1-hr single-baseline sessions. Daily GPS LOD estimates from the International GNSS Service (IGS) are combined with the VLBI UT1-TAI by modeling the natural excitation of LOD as the integral of a white noise process (i.e., as a random walk) and the UT1 variations as the integration of LOD, similar to the method described by Morabito et al. (1988). To account for GPS technique errors, which express themselves mostly as temporally correlated biases in the LOD measurements, a Gauss-Markov model has been added to assimilate the IGS data, together with a fortnightly sinusoidal term to capture errors in the IGS treatments of tidal effects. Evaluated against independent atmospheric and oceanic axial angular momentum (AAM + OAM) excitations and compared to other UT1/LOD combinations, ours performs best overall in terms of lowest RMS residual and highest correlation with (AAM + OAM) over sliding intervals down to 3 d. The IERS 05C04 and Bulletin A combinations show strong high-frequency smoothing and other problems. Until modified, the JPL SPACE series suffered in the high frequencies from not including any GPS-based LODs. We find, surprisingly, that further improvements are possible in the Kalman filter combination by selective rejection of some VLBI data. The best combined results are obtained by excluding all the 1-hr single-baseline UT1 data as well as those 24-hr UT1 measurements with formal errors greater than 5 μs (about 18% of the multi-baseline sessions). A rescaling of the VLBI formal errors, rather than rejection, was not an effective strategy. These results suggest that the UT1 errors of the 1-hr and weaker 24-hr VLBI sessions are non-Gaussian and more heterogeneous than expected, possibly due to the diversity of observing geometries used, other neglected systematic effects, or to the much shorter observational averaging interval of the single-baseline sessions. UT1 prediction services could benefit from better handling of VLBI inputs together with proper assimilation of IGS LOD products, including using the Ultra-rapid series that is updated four times daily with 15 hr delay.
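The combination model described, UT1 as the integral of LOD, LOD as a random walk, and a Gauss-Markov bias absorbing GPS technique errors, maps naturally onto a three-state Kalman filter. A minimal sketch with assumed noise values; the fortnightly tidal sine term is omitted for brevity:

```python
import numpy as np

dt = 1.0   # days
# State: [UT1-TAI (ms), excess LOD (ms/d), GPS LOD bias (ms/d)].
F = np.array([[1.0, -dt, 0.0],                  # UT1 integrates -LOD
              [0.0, 1.0, 0.0],                  # LOD: random walk
              [0.0, 0.0, np.exp(-dt / 10.0)]])  # bias: Gauss-Markov, tau assumed 10 d
Q = np.diag([0.0, 0.02**2, 0.005**2])           # assumed process noise
H_vlbi = np.array([[1.0, 0.0, 0.0]])            # VLBI observes UT1 directly
H_gps = np.array([[0.0, 1.0, 1.0]])             # GPS observes biased LOD

def kf_step(x, P, z, H, r):
    x, P = F @ x, F @ P @ F.T + Q               # predict
    if z is not None:                            # skip update on empty epochs
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros(3), np.eye(3) * 10.0
x, P = kf_step(x, P, z=12.345, H=H_vlbi, r=0.005**2)   # a VLBI UT1 epoch (ms)
x, P = kf_step(x, P, z=0.84, H=H_gps, r=0.02**2)       # a daily GPS LOD value
print(x)
```

The Gauss-Markov bias state is what lets the filter assimilate GPS LOD without letting its temporally correlated technique errors leak into UT1.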
Driving Intervention for Returning Combat Veterans.
Classen, Sherrilene; Winter, Sandra; Monahan, Miriam; Yarney, Abraham; Link Lutz, Amanda; Platek, Kyle; Levy, Charles
2017-04-01
Increased crash incidence following deployment and veterans' reports of driving difficulty spurred traffic safety research for this population. We conducted an interim analysis of the efficacy of a simulator-based occupational therapy driving intervention (OT-DI) compared with traffic safety education (TSE) in a randomized controlled trial. During baseline and post-testing, OT-Driver Rehabilitation Specialists and one OT-Certified Driver Rehabilitation Specialist measured driving performance errors on a DriveSafety CDS-250 high-fidelity simulator. The intervention group (n = 13) received three OT-DI sessions addressing driving errors and visual-search retraining. The control group (n = 13) received three TSE sessions addressing personal factors and defensive driving. Based on Wilcoxon rank-sum analysis, the OT-DI group's errors were significantly reduced when comparing baseline with Post-Test 1 (p < .0001) and when comparing the OT-DI group with the TSE group at Post-Test 1 (p = .01). These findings provide support for the efficacy of the OT-DI and set the stage for a future effectiveness study.
Anstey, Kaarin J; Eramudugolla, Ranmalee; Kiely, Kim M; Price, Jasmine
2018-06-01
We evaluated the effectiveness of individually tailored driving lessons compared with a road rules refresher course for improving older driver safety. Two-arm parallel randomised controlled trial, involving current drivers aged 65 and older (mean age 72.0, 47.4% male) residing in Canberra, Australia. The intervention group (n = 28) received a two-hour class-based road rules refresher course, and two one-hour driving lessons tailored to improve poor driving skills and habits identified in a baseline on-road assessment. The control group (n = 29) received the road rules refresher course only. Tests of cognitive performance and on-road driving were conducted at baseline and at 12 weeks. The main outcome measure was the driver safety rating (DSR) on the on-road driving test. The number of critical errors made during the on-road test was also recorded. 55 drivers completed the trial (intervention group: 27, control group: 28). Both groups showed a reduction in dangerous/hazardous driver errors that required instructor intervention. From baseline to follow-up there was a greater reduction in the number of critical errors made by the intervention group relative to the control group (IRR = 0.53, SE = 0.1, p = .008). The intervention group improved on the DSR more than the control group (intervention mean change = 1.07, SD = 2.00; control group mean change = 0.32, SD = 1.61). The intervention group had 64% remediation of unsafe driving, with drivers who scored 'fail' at baseline passing at follow-up; the control group had 25% remediation. Tailored driving lessons reduced the critical driving errors made by older adults. Longer-term follow-up and larger trials are required. Copyright © 2018 Elsevier Ltd. All rights reserved.
Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn
2009-04-01
Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions for which evidence of effectiveness was identified are estimated to be extremely cost-effective when compared with the baseline scenario. The pharmacist-led reconciliation intervention has the highest expected net benefits, and a probability of being cost-effective of over 60% at a QALY value of £10 000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions, but also in their evaluation, and may justify the primary evaluation of more than one specification of the included interventions.
Tracking Progress in Improving Diagnosis: A Framework for Defining Undesirable Diagnostic Events.
Olson, Andrew P J; Graber, Mark L; Singh, Hardeep
2018-01-29
Diagnostic error is a prevalent, harmful, and costly phenomenon. Multiple national health care and governmental organizations have recently identified the need to improve diagnostic safety as a high priority. A major barrier, however, is the lack of standardized, reliable methods for measuring diagnostic safety. Given the absence of reliable and valid measures for diagnostic errors, we need methods to help establish some type of baseline diagnostic performance across health systems, as well as to enable researchers and health systems to determine the impact of interventions for improving the diagnostic process. Multiple approaches have been suggested but none widely adopted. We propose a new framework for identifying "undesirable diagnostic events" (UDEs) that health systems, professional organizations, and researchers could further define and develop to enable standardized measurement and reporting related to diagnostic safety. We propose an outline for UDEs that identifies both conditions prone to diagnostic error and the contexts of care in which these errors are likely to occur. Refinement and adoption of this framework across health systems can facilitate standardized measurement and reporting of diagnostic safety.
Measurement of baseline and orientation between distributed aerospace platforms.
Wang, Wen-Qin
2013-01-01
Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement of highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting platform and the receiving platform, and the orientation of the platforms towards each other, must be measured in real time during data recording. In this paper, we propose an improved pulsed duplex microwave ranging approach, which allows determining the spatial baseline and orientation between distributed aerospace platforms using the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors due to the separate oscillators used in the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.
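One classical way to cancel clock errors in a pulsed duplex link, consistent with the idea described, is a two-way exchange: each platform timestamps transmit and receive events in its own clock, and sums and differences of the intervals isolate range from clock offset. A sketch with assumed numbers, not the authors' exact scheme:

```python
C = 299_792_458.0   # speed of light (m/s)

# Timestamps in each platform's own clock (s); B's clock offset is unknown.
true_range, offset = 74_250.0, 3.2e-4
tof = true_range / C
t1 = 0.010000000            # A transmits (A clock)
t2 = t1 + tof + offset      # B receives (B clock)
t3 = t2 + 0.002             # B replies after a known turnaround (B clock)
t4 = t3 + tof - offset      # A receives (A clock)

# Round-trip minus turnaround: the unknown offset cancels in the range;
# the difference of the one-way intervals isolates the offset itself.
est_range = C * ((t4 - t1) - (t3 - t2)) / 2.0
est_offset = ((t2 - t1) - (t4 - t3)) / 2.0
print(f"range = {est_range:.3f} m, clock offset = {est_offset * 1e6:.1f} us")
```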
The Influence of Training Phase on Error of Measurement in Jump Performance.
Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B
2016-03-01
The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. A small but substantial factor difference of 1.11 was observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.
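The ×/÷ notation above denotes a factor error, which arises naturally when variability is computed on log-transformed data. A minimal sketch of that calculation for one subject in one phase, with hypothetical mean-power values, is:

```python
import numpy as np

# hypothetical daily countermovement-jump mean-power trials (W) for one subject
power = np.array([1450.0, 1510.0, 1395.0, 1480.0, 1530.0, 1420.0])

sd_log = np.std(np.log(power), ddof=1)
factor_error = np.exp(sd_log)          # read as x/÷ factor_error
cv_percent = 100.0 * (factor_error - 1.0)   # approximate coefficient of variation
print(factor_error, cv_percent)
```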
Longitudinal decline of driving safety in Parkinson disease.
Uc, Ergun Y; Rizzo, Matthew; O'Shea, Amy M J; Anderson, Steven W; Dawson, Jeffrey D
2017-11-07
To longitudinally assess and predict on-road driving safety in Parkinson disease (PD). Drivers with PD (n = 67) and healthy controls (n = 110) drove a standardized route in an instrumented vehicle and were invited to return 2 years later. A professional driving expert reviewed drive data and videos to score safety errors. At baseline, drivers with PD performed worse on visual, cognitive, and motor tests, and committed more road safety errors compared to controls (median PD 38.0 vs controls 30.5; p < 0.001). A smaller proportion of drivers with PD returned for repeat testing (42.8% vs 62.7%; p < 0.01). At baseline, returnees with PD made fewer errors than nonreturnees with PD (median 34.5 vs 40.0; p < 0.05) and performed similarly to control returnees (median 33). Baseline global cognitive performance of returnees with PD was better than that of nonreturnees with PD, but worse than for control returnees (p < 0.05). After 2 years, returnees with PD showed greater cognitive decline and larger increase in error counts than control returnees (median increase PD 13.5 vs controls 3.0; p < 0.001). Driving error count increase in the returnees with PD was predicted by greater error count and worse visual acuity at baseline, and by greater interval worsening of global cognition, Unified Parkinson's Disease Rating Scale activities of daily living score, executive functions, visual processing speed, and attention. Despite dropout of the more impaired drivers within the PD cohort, returning drivers with PD, who drove like controls without PD at baseline, showed many more driving safety errors than controls after 2 years. Driving decline in PD was predicted by baseline driving performance and by deterioration of cognitive, visual, and functional abilities on follow-up. © 2017 American Academy of Neurology.
Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried
2017-01-01
We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity, namely fractional anisotropy (FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT), were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable-insertion errors (from best baseline to post 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.
The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Walker, Eric L.
2011-01-01
The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
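The variance algebra behind this argument is standard: for an increment Δ = A − B whose components carry error standard deviations σ_A and σ_B with correlation ρ, the increment error is σ_Δ = sqrt(σ_A² + σ_B² − 2ρσ_Aσ_B), which shrinks toward zero as ρ approaches unity. A quick numerical illustration with hypothetical sigmas:

```python
import numpy as np

s1, s2 = 0.020, 0.018    # hypothetical systematic-error sigmas of the two solutions
for rho in (0.0, 0.8, 0.99):
    s_inc = np.sqrt(s1**2 + s2**2 - 2.0 * rho * s1 * s2)
    print(f"rho={rho:4.2f}  sigma_increment={s_inc:.4f}")
# rho near 1 drives the increment uncertainty far below either absolute uncertainty
```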
Covariance analysis of the airborne laser ranging system
NASA Technical Reports Server (NTRS)
Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.
1981-01-01
The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.
Accuracy of computerized automatic identification of cephalometric landmarks by a designed software.
Shahidi, Sh; Shahidi, S; Oshagh, M; Gozin, F; Salehi, P; Danaei, S M
2013-01-01
The purpose of this study was to design software for localization of cephalometric landmarks and to evaluate its accuracy in finding landmarks. 40 digital cephalometric radiographs were randomly selected. 16 landmarks which were important in most cephalometric analyses were chosen to be identified. Three expert orthodontists manually identified landmarks twice. The mean of two measurements of each landmark was defined as the baseline landmark. The computer was then able to compare the automatic system's estimate of a landmark with the baseline landmark. The software was designed using Delphi and Matlab programming languages. The techniques were template matching, edge enhancement and some accessory techniques. The total mean error between manually identified and automatically identified landmarks was 2.59 mm. 12.5% of landmarks had mean errors less than 1 mm. 43.75% of landmarks had mean errors less than 2 mm. The mean errors of all landmarks except the anterior nasal spine were less than 4 mm. This software had significant accuracy for localization of cephalometric landmarks and could be used in future applications. It seems that the accuracy obtained with the software which was developed in this study is better than previous automated systems that have used model-based and knowledge-based approaches.
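The abstract names template matching and edge enhancement as the core techniques. The sketch below shows one common way these steps are combined using OpenCV; the file names are hypothetical and the pipeline is an illustration of the general approach, not the authors' Delphi/Matlab implementation:

```python
import cv2

ceph = cv2.imread("cephalogram.png", cv2.IMREAD_GRAYSCALE)       # hypothetical inputs
templ = cv2.imread("sella_template.png", cv2.IMREAD_GRAYSCALE)

# edge enhancement before matching (one common variant of the idea)
ceph_e = cv2.Canny(ceph, 50, 150)
templ_e = cv2.Canny(templ, 50, 150)

res = cv2.matchTemplate(ceph_e, templ_e, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(res)   # best-match location and its score

h, w = templ_e.shape
landmark = (top_left[0] + w // 2, top_left[1] + h // 2)   # estimated landmark (x, y)
```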
Estimating sizes of faint, distant galaxies in the submillimetre regime
NASA Astrophysics Data System (ADS)
Lindroos, L.; Knudsen, K. K.; Fan, L.; Conway, J.; Coppin, K.; Decarli, R.; Drouart, G.; Hodge, J. A.; Karim, A.; Simpson, J. M.; Wardlow, J.
2016-10-01
We measure the sizes of redshift ˜2 star-forming galaxies by stacking data from the Atacama Large Millimeter/submillimeter Array (ALMA). We use a uv-stacking algorithm in combination with model fitting in the uv-domain and show that this allows for robust measures of the sizes of marginally resolved sources. The analysis is primarily based on the 344 GHz ALMA continuum observations centred on 88 submillimetre galaxies in the LABOCA ECDFS Submillimeter Survey (ALESS). We study several samples of galaxies at z ≈ 2 with M* ≈ 5 × 10^10 M⊙, selected using near-infrared photometry (distant red galaxies, extremely red objects, sBzK-galaxies, and galaxies selected on photometric redshift). We find that the typical sizes of these galaxies are ˜0.6 arcsec, which corresponds to ˜5 kpc at z = 2; this agrees well with the median sizes measured in the near-infrared z band (˜0.6 arcsec). We find errors on our size estimates of ˜0.1-0.2 arcsec, which agree well with the expected errors for model fitting at the given signal-to-noise ratio. With the uv-coverage of our observations (18-160 m), the size and flux density measurements are sensitive to scales out to 2 arcsec. We compare this to a simulated ALMA Cycle 3 data set with intermediate-length baseline coverage, and we find that, using only these baselines, the measured stacked flux density would be an order of magnitude fainter. This highlights the importance of short baselines to recover the full flux density of high-redshift galaxies.
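For a marginally resolved source, uv-domain model fitting often reduces to fitting a Gaussian visibility profile: a circular Gaussian of angular standard deviation σ (radians) has visibility amplitude S·exp(−2π²σ²q²) at uv-distance q (in wavelengths). The sketch below fits that form to synthetic stacked amplitudes; the model choice and every number are assumptions for illustration, not the ALESS stacking pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

ARCSEC = np.pi / (180.0 * 3600.0)   # arcsec -> radians

def vis_gauss(q, flux, sigma):
    # visibility amplitude of a circular Gaussian source (sigma in radians)
    return flux * np.exp(-2.0 * np.pi**2 * sigma**2 * q**2)

# synthetic stacked amplitudes vs uv-distance q in wavelengths (hypothetical)
q = np.linspace(2.0e4, 1.8e5, 25)
sigma_true = (0.6 / 2.355) * ARCSEC          # a 0.6-arcsec-FWHM source
rng = np.random.default_rng(0)
amp = vis_gauss(q, 1.2, sigma_true) + rng.normal(0.0, 0.03, q.size)

(flux_fit, sigma_fit), _ = curve_fit(vis_gauss, q, amp, p0=(1.0, 1.0e-6))
fwhm_arcsec = 2.355 * sigma_fit / ARCSEC     # recovered angular size
```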
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
1997-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
Bitton, Rachel R.; Webb, Taylor D.; Pauly, Kim Butts; Ghanouni, Pejman
2015-01-01
Purpose To investigate thermal dose volume (TDV) and non-perfused volume (NPV) of magnetic resonance-guided focused ultrasound (MRgFUS) treatments in patients with soft tissue tumors, and describe a method for MR thermal dosimetry using a baseline reference. Materials and Methods Agreement between TDV and immediate post treatment NPV was evaluated from MRgFUS treatments of five patients with biopsy-proven desmoid tumors. Thermometry data (gradient echo, 3T) were analyzed over the entire course of the treatments to discern temperature errors in the standard approach. The technique searches previously acquired baseline images for a match using 2D normalized cross-correlation and a weighted mean of phase difference images. Thermal dose maps and TDVs were recalculated using the matched baseline and compared to NPV. Results TDV and NPV showed between 47%–91% disagreement, using the standard immediate baseline method for calculating TDV. Long-term thermometry showed a nonlinear local temperature accrual, where peak additional temperature varied between 4–13°C (mean = 7.8°C) across patients. The prior baseline method could be implemented by finding a previously acquired matching baseline 61% ± 8% (mean ± SD) of the time. We found 7%–42% of the disagreement between TDV and NPV was due to errors in thermometry caused by heat accrual. For all patients, the prior baseline method increased the estimated treatment volume and reduced the discrepancies between TDV and NPV (P = 0.023). Conclusion This study presents a mismatch between in-treatment and post treatment efficacy measures. The prior baseline approach accounts for local heating and improves the accuracy of thermal dose-predicted volume. PMID:26119129
SMOS: a satellite mission to measure ocean surface salinity
NASA Astrophysics Data System (ADS)
Font, Jordi; Kerr, Yann H.; Srokosz, Meric A.; Etcheto, Jacqueline; Lagerloef, Gary S.; Camps, Adriano; Waldteufel, Philippe
2001-01-01
ESA's SMOS (Soil Moisture and Ocean Salinity) Earth Explorer Opportunity Mission will be launched by 2005. Its baseline payload is a microwave L-band (21 cm, 1.4 GHz) 2D interferometric radiometer, Y shaped, with three arms 4.5 m long. This frequency allows the measurement of brightness temperature (Tb) under the best conditions to retrieve soil moisture and sea surface salinity (SSS). Unlike other oceanographic variables, until now it has not been possible to measure salinity from space. However, large ocean areas lack significant salinity measurements. The 2D interferometer will measure Tb at large and different incidence angles, for two polarizations. It is possible to obtain SSS from L-band passive microwave measurements if the other factors influencing Tb (SST, surface roughness, foam, sun glint, rain, ionospheric effects and galactic/cosmic background radiation) can be accounted for. Since the radiometric sensitivity is low, SSS cannot be recovered to the required accuracy from a single measurement, as the error is about 1-2 psu. If the errors contributing to the uncertainty in Tb are random, averaging the independent data and views along the track, and considering a 200 km square, allows the error to be reduced to 0.1-0.2 psu, assuming all ancillary errors are budgeted.
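The error-reduction argument is the usual 1/√N law for averaging independent random errors. A quick check with numbers consistent with the abstract (1-2 psu single-measurement error, 0.1-0.2 psu target; the exact counts are hypothetical):

```python
import numpy as np

sigma_single = 1.5                        # psu, single-measurement SSS error
target = 0.15                             # psu, required accuracy after averaging
n_needed = (sigma_single / target) ** 2   # sigma_avg = sigma_single / sqrt(N)
print(int(np.ceil(n_needed)))             # on the order of 100 independent looks
```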
Zhou, Wen-Jun; Zhang, Yong-Ye; Li, Hua; Wu, Yu-Fei; Xu, Ji; Lv, Sha; Li, Ge; Liu, Shi-Chun; Song, Sheng-Fang
2016-01-01
Background To determine the change in refractive error and the incidence of myopia among school-aged children in the Yongchuan District of Chongqing City, Western China. Methods A population-based cross-sectional survey was initially conducted in 2006 among 3070 children aged 6 to 15 years. A longitudinal follow-up study was then conducted 5 years later between November 2011 and March 2012. Refractive error was measured under cycloplegia with autorefraction. Age, sex, and baseline refractive error were evaluated as risk factors for progression of refractive error and incidence of myopia. Results Longitudinal data were available for 1858 children (60.5%). The cumulative mean change in refractive error was −2.21 (standard deviation [SD], 1.87) diopters (D) for the entire study population, with an annual progression of refraction in a myopic direction of −0.43 D. Myopic progression of refractive error was associated with younger age, female sex, and higher myopic or hyperopic refractive error at baseline. The cumulative incidence of myopia, defined as a spherical equivalent refractive error of −0.50 D or more, among initial emmetropes and hyperopes was 54.9% (95% confidence interval [CI], 45.2%–63.5%), with an annual incidence of 10.6% (95% CI, 8.7%–13.1%). Myopia was more likely to develop in female and older children. Conclusions In Western China, both myopic progression and incidence of myopia were higher than those of children from most other locations in China and from the European Caucasian population. Compared with a previous study in China, there was a relative increase in annual myopia progression and annual myopia incidence, a finding consistent with the increasing trend in the prevalence of myopia in China. PMID:26875599
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
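The paper's own code is in R (PAMR); the sketch below illustrates the same two-level (nested) cross-validation idea in Python with scikit-learn, on deliberately non-informative data where the honest error rate should hover near 0.5, echoing the permutation-based bias check described above. The classifier choice and data shapes are assumptions for illustration:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))       # non-informative "expression" matrix
y = rng.integers(0, 2, size=80)      # random class labels (permutation analogue)

# inner loop tunes the hyperparameter; outer loop estimates the error rate
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=StratifiedKFold(5))
outer_scores = cross_val_score(inner, X, y, cv=StratifiedKFold(5))

print(1.0 - outer_scores.mean())     # near 0.5 here; an optimistically tuned
                                     # single-level estimate would dip below
```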
NASA Technical Reports Server (NTRS)
Harvie, E.; Filla, O.; Baker, D.
1993-01-01
Analysis performed in the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) measures error in the static Earth sensor onboard the National Oceanic and Atmospheric Administration (NOAA)-10 spacecraft using flight data. Errors are computed as the difference between Earth sensor pitch and roll angle telemetry and reference pitch and roll attitude histories propagated by gyros. The flight data error determination illustrates the effect on horizon sensing of systemic variation in the Earth infrared (IR) horizon radiance with latitude and season, as well as the effect of anomalies in the global IR radiance. Results of the analysis provide a comparison between static Earth sensor flight performance and that of scanning Earth sensors studied previously in the GSFC/FDD. The results also provide a baseline for evaluating various models of the static Earth sensor. Representative days from the NOAA-10 mission indicate the extent of uniformity and consistency over time of the global IR horizon. A unique aspect of the NOAA-10 analysis is the correlation of flight data errors with independent radiometric measurements of stratospheric temperature. The determination of the NOAA-10 static Earth sensor error contributes to realistic performance expectations for missions to be equipped with similar sensors.
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators that utilize a parametrization structure adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error, which may occur due to imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors, and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level was achieved using the adaptive approximation based controller, while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN-based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller, which is tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.
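To make the augmentation idea concrete, here is a minimal scalar sketch of an RBF network whose weights adapt via a Lyapunov-style gradient law driven by the tracking error. It uses a fixed basis grid for brevity (the thesis's contribution is precisely that the basis is grown and pruned online, which is not reproduced here), and all gains are hypothetical:

```python
import numpy as np

centers = np.linspace(-1.0, 1.0, 9)   # fixed grid; the SORBFN grows/prunes this set
width = 0.3
w = np.zeros_like(centers)            # adapted output weights
gamma, dt = 5.0, 0.01                 # adaptation gain and integration step

def phi(x):
    # Gaussian radial basis activations at state x
    return np.exp(-((x - centers) ** 2) / (2.0 * width**2))

def adaptive_correction(x, e):
    """One Euler step of the update w_dot = gamma * e * phi(x); returns the
    network output that is added to the baseline dynamic-inversion command."""
    global w
    w = w + dt * gamma * e * phi(x)
    return float(w @ phi(x))
```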
Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa
2015-01-01
Current research suggests that the neuropathology of dementia-including brain changes leading to memory impairment and cognitive decline-is evident years before the onset of this disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk for developing dementia. Therefore, identifying biomarkers that can be easily assessed within the clinical setting and predict cognitive decline is important. Early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: 1) based on baseline cognitive function, 2) based on variables consistently selected in every cross-validation loop, and 3) a full model based on all the 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47. This number was smaller than that of the full model (115.33) and the model with baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. We built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing the complexity of the model without changing the model significantly, our model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
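A compact sketch of this modelling recipe (elastic net with cross-validated penalties, an outer prediction loop, and inspection of which variables survive the L1 penalty) is below. The data are random stand-ins with the study's shapes (89 participants, 32 baseline measures); nothing here reproduces the study's actual variables or results:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 32))    # stand-in for the 32 baseline measures
y = rng.normal(size=89)          # stand-in for cognitive score after one year

model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=10)   # inner CV picks penalties
pred = cross_val_predict(model, X, y, cv=10)            # outer loop for honest error
mspe = float(np.mean((y - pred) ** 2))                  # mean squared prediction error

model.fit(X, y)
selected = np.flatnonzero(model.coef_)   # variables retained by the L1 part
```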
Error baseline rates of five sample preparation methods used to characterize RNA virus populations.
Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods.
Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay
NASA Technical Reports Server (NTRS)
Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.
1991-01-01
An important source of error in VLBI estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
NASA Technical Reports Server (NTRS)
King, R. W., Jr.
1975-01-01
The technique of differential very-long baseline interferometry was used to measure the relative positions of the ALSEP transmitters at the Apollo 12, 14, 15, 16, and 17 lunar landing sites with uncertainties less than 0.005 of geocentric arc. These measurements yielded improved determinations of the selenodetic coordinates of the Apollo landing sites, and of the physical libration of the moon. By means of a new device, the differential Doppler receiver (DDR), instrumental errors were reduced to less than the equivalent of 0.001. DDRs were installed in six stations of the NASA spaceflight tracking and data network and used in an extensive program of observations beginning in March 1973.
An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.
Tarlow, Kevin R
2017-07-01
Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or due to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: Its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
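The two ingredients named in the abstract, robust regression and rank correlation, combine roughly as sketched below: fit a Theil-Sen line to the baseline phase, detrend all scores with it, then correlate phase membership with the detrended scores. This is a simplified illustration of the idea; Tarlow's published procedure additionally pretests whether a baseline trend is present before correcting, which is omitted here:

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

def baseline_corrected_tau(baseline, treatment):
    """Detrend with a Theil-Sen fit to the baseline phase, then take the
    rank correlation between phase membership and the detrended scores."""
    baseline = np.asarray(baseline, float)
    treatment = np.asarray(treatment, float)
    slope, intercept, _, _ = theilslopes(baseline, np.arange(len(baseline)))
    scores = np.concatenate([baseline, treatment])
    trend = intercept + slope * np.arange(len(scores))
    detrended = scores - trend
    phase = np.concatenate([np.zeros(len(baseline)), np.ones(len(treatment))])
    return kendalltau(phase, detrended)   # (tau, p-value)

tau, p = baseline_corrected_tau([3, 4, 5, 5], [6, 7, 8, 8, 9])
```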
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
Intrafractional baseline drift during free breathing breast cancer radiation therapy.
Jensen, Christer Andre; Acosta Roa, Ana María; Lund, Jo-Åsmund; Frengen, Jomar
2017-06-01
Intrafraction motion in breast cancer radiation therapy (BCRT) has not yet been thoroughly described in the literature. It has been observed that baseline drift occurs as part of the intrafraction motion. This study aims to measure baseline drift and its incidence in free-breathing BCRT patients using an in-house-developed laser system for tracking the position of the sternum. Baseline drift was monitored in 20 right-sided breast cancer patients receiving free-breathing 3D-conformal RT using an in-house-developed laser system that measures one-dimensional distance in the AP direction. A total of 357 patient respiratory traces from treatment sessions were logged and analysed. Baseline drift was compared to patient positioning error measured from in-field portal imaging. The mean overall baseline drift at the end of treatment sessions was -1.3 mm for the patient population. Relatively small baseline drift was observed during the first fraction; however, it was clearly detected already at the second fraction. Over 90% of the baseline drift occurs during the first 3 min of each treatment session. The baseline drift rate for the population was -0.5 ± 0.2 mm/min in the posterior direction during the first minute after localization. Only 4% of the treatment sessions had a 5 mm or larger baseline drift at 5 min, all towards the posterior direction. Mean baseline drift in the posterior direction in free-breathing BCRT was observed in 18 of 20 patients over all treatment sessions. This study shows that there is a substantial baseline drift in free-breathing BCRT patients. No clear baseline drift was observed during the first treatment session; however, baseline drift was markedly present in the rest of the sessions. Intrafraction motion due to baseline drift should be accounted for in margin calculations.
Evaluation of very long baseline interferometry atmospheric modeling improvements
NASA Technical Reports Server (NTRS)
Macmillan, D. S.; Ma, C.
1994-01-01
We determine the improvement in baseline length precision and accuracy using new atmospheric delay mapping functions and MTT by analyzing the NASA Crustal Dynamics Project research and development (R&D) experiments and the International Radio Interferometric Surveying (IRIS) A experiments. These mapping functions reduce baseline length scatter by about 20% below that using the CfA2.2 dry and Chao wet mapping functions. With the newer mapping functions, average station vertical scatter inferred from observed length precision (given by length repeatabilities) is 11.4 mm for the 1987-1990 monthly R&D series of experiments and 5.6 mm for the 3-week-long extended research and development experiment (ERDE) series. The inferred monthly R&D station vertical scatter is reduced by 2 mm, or by 7 mm in a root-sum-square (RSS) sense. Length repeatabilities are optimum when observations below a 7-8 deg elevation cutoff are removed from the geodetic solution. Analyses of IRIS-A data from 1984 through 1991 and the monthly R&D experiments both yielded a nonatmospheric unmodeled station vertical error of about 8 mm. In addition, analysis of the IRIS-A experiments revealed systematic effects in the evolution of some baseline length measurements. The length rate of change has an apparent acceleration, and the length evolution has a quasi-annual signature. We show that the origin of these effects is unlikely to be related to atmospheric modeling errors. Rates of change of the transatlantic Westford-Wettzell and Richmond-Wettzell baseline lengths calculated from 1988 through 1991 agree with the NUVEL-1 plate motion model (Argus and Gordon, 1991) to within 1 mm/yr. Short-term (less than 90 days) variations of IRIS-A baseline length measurements contribute more than 90% of the observed scatter about a best fit line, and this short-term scatter has large variations on an annual time scale.
Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting
2018-01-21
Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed near-perfect polynomial functions in phantom tests mimicking human bodies, consistent with recent NIRS studies. More importantly, our study shows that, among second- to sixth-order polynomials, the fourth-order polynomial function offered the most distinguishable performance, with stable, low-computation-burden fitting calibration (R-square >0.99 for all probes), as evaluated by the parameters R-square, sum of squares due to error, and residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
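Fourth-order polynomial baseline removal of this kind is straightforward to sketch. Below, a synthetic drift (all coefficients hypothetical) is added to a toy hemodynamic signal, fitted with a degree-4 polynomial, and subtracted; the R-square of the fit is computed as in the abstract's evaluation. This illustrates the technique, not the device's firmware:

```python
import numpy as np
from numpy.polynomial import Polynomial

t = np.linspace(0.0, 3.5 * 3600.0, 5000)          # seconds, > 3.5 h recording
signal = 0.05 * np.sin(2.0 * np.pi * t / 10.0)    # hypothetical hemodynamic component
drift = 1e-16 * t**4 - 5e-13 * t**3 + 4e-9 * t**2 + 1e-6 * t   # hypothetical baseline
raw = signal + drift

p = Polynomial.fit(t, raw, deg=4)     # fourth-order fit (well-conditioned scaling)
baseline = p(t)
corrected = raw - baseline            # drift-free signal for further analysis

ss_res = np.sum((raw - baseline) ** 2)
ss_tot = np.sum((raw - raw.mean()) ** 2)
r_square = 1.0 - ss_res / ss_tot      # should exceed 0.99 when drift dominates
```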
How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?
Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C
2016-10-01
The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.
A New Approach to Estimate Forest Parameters Using Dual-Baseline Pol-InSAR Data
NASA Astrophysics Data System (ADS)
Bai, L.; Hong, W.; Cao, F.; Zhou, Y.
2009-04-01
In POL-InSAR applications using the ESPRIT technique, it is assumed that there exist stable scattering centres in the forest. However, observations of forest severely suffer from volume and temporal decorrelation, so the forest scatterers are not as stable as assumed, and the obtained interferometric information is not as accurate as expected. Moreover, the ESPRIT technique cannot identify which interferometric phases correspond to the ground and which to the canopy, and it provides multiple estimates of the height between two scattering centres because of phase-unwrapping ambiguity. Estimation errors are therefore introduced into the forest height results. To suppress these two types of errors, we use dual-baseline POL-InSAR data to estimate forest height. Dual-baseline coherence optimization is applied to obtain interferometric information of stable scattering centres in the forest. From the interferometric phases for the different baselines, estimation errors caused by phase unwrapping are resolved, and other estimation errors can be suppressed as well. Experiments are performed on ESAR L-band POL-InSAR data. Experimental results show the proposed method provides more accurate forest height than the ESPRIT technique.
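The role of the second baseline in resolving phase-unwrapping ambiguity can be sketched as follows: each baseline yields candidate heights h = (φ + 2πn)/k_z for integer n, where k_z is that baseline's vertical wavenumber, and the correct integers are those that make the two baselines agree. This is a generic illustration of the idea under assumed k_z values, not the paper's algorithm:

```python
import numpy as np

def resolve_height(phi1, kz1, phi2, kz2, n_max=3):
    """Pick phase-unwrapping integers so the two baselines' height estimates
    h = (phi + 2*pi*n) / kz agree best; kz is the vertical wavenumber (rad/m)."""
    best_gap, best_h = np.inf, None
    for n1 in range(-n_max, n_max + 1):
        for n2 in range(-n_max, n_max + 1):
            h1 = (phi1 + 2.0 * np.pi * n1) / kz1
            h2 = (phi2 + 2.0 * np.pi * n2) / kz2
            if abs(h1 - h2) < best_gap:
                best_gap, best_h = abs(h1 - h2), 0.5 * (h1 + h2)
    return best_h

h = resolve_height(phi1=1.1, kz1=0.12, phi2=-0.4, kz2=0.19)   # hypothetical inputs
```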
(abstract) A VLBI Test of Tropospheric Delay Calibration with WVRs
NASA Technical Reports Server (NTRS)
Linfield, R. P.; Teitelbaum, L. P.; Keihm, S. J.; Resch, G. M.; Mahoney, M. J.; Treuhaft, R. N.
1994-01-01
Dual frequency (S/X band) very long baseline interferometry (VLBI) observations were used to test troposphere calibration by water vapor radiometers (WVRs). Comparison of the VLBI and WVR measurements shows a statistical agreement (specifically, their structure functions agree) on time scales less than 700 seconds. On longer time scales, VLBI instrumental errors become important. The improvement in VLBI residual delays from WVR calibration was consistent with the measured level of tropospheric fluctuations.
Multiple Intravenous Infusions Phase 2b: Laboratory Study
Pinkney, Sonia; Fan, Mark; Chan, Katherine; Koczmara, Christine; Colvin, Christopher; Sasangohar, Farzan; Masino, Caterina; Easty, Anthony; Trbovich, Patricia
2014-01-01
Background Administering multiple intravenous (IV) infusions to a single patient via infusion pump occurs routinely in health care, but there has been little empirical research examining the risks associated with this practice or ways to mitigate those risks. Objectives To identify the risks associated with multiple IV infusions and assess the impact of interventions on nurses’ ability to safely administer them. Data Sources and Review Methods Forty nurses completed infusion-related tasks in a simulated adult intensive care unit, with and without interventions (i.e., repeated-measures design). Results Errors were observed in completing common tasks associated with the administration of multiple IV infusions, including the following (all values from baseline, which was current practice):
- setting up and programming multiple primary continuous IV infusions (e.g., 11.7% programming errors)
- identifying IV infusions (e.g., 7.7% line-tracing errors)
- managing dead volume (e.g., 96.0% flush rate errors following IV syringe dose administration)
- setting up a secondary intermittent IV infusion (e.g., 11.3% secondary clamp errors)
- administering an IV pump bolus (e.g., 11.5% programming errors)
Of 10 interventions tested, 6 (1 practice, 3 technology, and 2 educational) significantly decreased or even eliminated errors compared to baseline. Limitations The simulation of an adult intensive care unit at 1 hospital limited the ability to generalize results. The study results were representative of nurses who received training in the interventions but had little experience using them. The longitudinal effects of the interventions were not studied. Conclusions Administering and managing multiple IV infusions is a complex and risk-prone activity. However, when a patient requires multiple IV infusions, targeted interventions can reduce identified risks. A combination of standardized practice, technology improvements, and targeted education is required. PMID:26316919
Interferometric detection of freeze-thaw displacements of Alaskan permafrost using ERS-1 data
NASA Technical Reports Server (NTRS)
Werner, Charles L.; Gabriel, Andrew K.
1993-01-01
The possibility of making large scale (50 km) measurements of motions of the earth's surface with high resolution (10 m) and very high accuracy (1 cm) from multipass SAR interferometry was established in 1989. Other experiments have confirmed the viability and usefulness of the method. Work is underway in various groups to measure displacements from volcanic activity, seismic events, glacier motion, and in the present study, freeze-thaw cycles in Alaskan permafrost. The ground is known to move significantly in these cycles, and provided that freezing does not cause image decorrelation, it should be possible to measure both ground swelling and subsidence. The authors have obtained data from multiple passes of ERS-1 over the Toolik Lake region of northern Alaska of suitable quality for interferometry. The data are processed into images, and single interferograms are formed in the usual manner. Phase unwrapping is performed, and the multipass baselines are estimated from the images using both orbit ephemerides and scene tie points. The phases are scaled by the baseline ratio, and a double-difference interferogram (DDI) is formed. It is found that there is a residual 'saddle-shape' phase error across the image, which is postulated to be caused by a small divergence (10^-2 deg) in the orbits. A simulation of a DDI from divergent orbits confirms the shape and magnitude of the error. A two-dimensional least squares fit to the error is performed, which is used to correct the DDI. The final, corrected DDI shows significant phase (altitude) changes over the period of the observation.
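A saddle-shaped phase error is the signature of a bilinear surface a + bx + cy + dxy, and the two-dimensional least-squares correction can be sketched directly. The bilinear model form and all values below are assumptions for illustration, not the authors' exact fit:

```python
import numpy as np

nr, nc = 200, 300
yy, xx = np.mgrid[0:nr, 0:nc].astype(float)
rng = np.random.default_rng(0)
# hypothetical unwrapped DDI phase: a bilinear "saddle" error plus noise
phase = (0.5 + 1.0e-3 * xx - 8.0e-4 * yy + 2.0e-6 * xx * yy
         + rng.normal(0.0, 0.05, (nr, nc)))

# least-squares fit of a + b*x + c*y + d*x*y, then subtract the fitted surface
A = np.column_stack([np.ones(phase.size), xx.ravel(), yy.ravel(),
                     (xx * yy).ravel()])
coef, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
corrected = phase - (A @ coef).reshape(nr, nc)
```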
Effects of repeated walking in a perturbing environment: a 4-day locomotor learning study.
Blanchette, Andreanne; Moffet, Helene; Roy, Jean-Sébastien; Bouyer, Laurent J
2012-07-01
Previous studies have shown that when subjects repeatedly walk in a perturbing environment, initial movement error becomes smaller, suggesting that retention of the adapted locomotor program occurred (learning). It has been proposed that the newly learned locomotor program may be stored separately from the baseline program. However, how locomotor performance evolves with repeated sessions of walking with the perturbation is not yet known. To address this question, 10 healthy subjects walked on a treadmill on 4 consecutive days. Each day, locomotor performance was measured using kinematics and surface electromyography (EMGs), before, during, and after exposure to a perturbation, produced by an elastic tubing that pulled the foot forward and up during swing, inducing a foot velocity error in the first strides. Initial movement error decreased significantly between days 1 and 2 and then remained stable. Associated changes in medial hamstring EMG activity stabilized only on day 3, however. Aftereffects were present after perturbation removal, suggesting that daily adaptation involved central command recalibration of the baseline program. Aftereffects gradually decreased across days but were still visible on day 4. Separation between the newly learned and baseline programs may take longer than suggested by the daily improvement in initial performance in the perturbing environment or may never be complete. These results therefore suggest that reaching optimal performance in a perturbing environment should not be used as the main indicator of a completed learning process, as central reorganization of the motor commands continues days after initial performance has stabilized.
Rhodes, Alison M; Tran, Thanh V
2013-02-01
This study examined the equivalence or comparability of the measurement properties of seven selected items measuring posttraumatic growth among self-identified Black (n = 270) and White (n = 707) adult survivors of Hurricane Katrina, using data from the Baseline Survey of the Hurricane Katrina Community Advisory Group Study. Internal consistency reliability was equally good for both groups (Cronbach's alphas = .79), as were correlations between individual scale items and their respective overall scale. Confirmatory factor analysis of a congeneric measurement model of seven selected items of posttraumatic growth showed adequate measures of fit for both groups. The results showed only small variation in magnitude of factor loadings and measurement errors between the two samples. Tests of measurement invariance showed mixed results, but overall indicated that factor loading, error variance, and factor variance were similar between the two samples. These seven selected items can be useful for future large-scale surveys of posttraumatic growth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boehnke, E McKenzie; DeMarco, J; Steers, J
2016-06-15
Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
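Since the tolerance thresholds are derived from ROC curves, the core of the detectability analysis can be sketched briefly. A hedged illustration (synthetic numbers, not the study's measurements) of threshold-independent error detection with logistic regression and area under the ROC curve:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical relative IQM deviations (%) for unmodified plans and plans with MLC errors
dev_unmodified = rng.normal(0.0, 1.0, 200)
dev_mlc_error = rng.normal(3.0, 1.5, 200)

X = np.concatenate([dev_unmodified, dev_mlc_error]).reshape(-1, 1)
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = plan contains a known MLC error

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.3f}")  # detectability, independent of any particular count threshold
```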
Standardising analysis of carbon monoxide rebreathing for application in anti-doping.
Alexander, Anthony C; Garvican, Laura A; Burge, Caroline M; Clark, Sally A; Plowman, James S; Gore, Christopher J
2011-03-01
Determination of total haemoglobin mass (Hbmass) via carbon monoxide (CO) depends critically on repeatable measurement of percent carboxyhaemoglobin (%HbCO) in blood with a hemoximeter. The main aim of this study was to determine, for an OSM3 hemoximeter, the number of replicate measures as well as the theoretical change in percent carboxyhaemoglobin required to yield a random error of analysis (Analyser Error) of ≤1%. Before and after inhalation of CO, nine participants provided a total of 576 blood samples that were each analysed five times for percent carboxyhaemoglobin on one of three OSM3 hemoximeters, with approximately one-third of the blood samples analysed on each OSM3. The Analyser Error was calculated for the first two (duplicate), first three (triplicate) and first four (quadruplicate) measures on each OSM3, as well as for all five measures (quintuplicates). Two methods of CO-rebreathing, a 2-min and a 10-min procedure, were evaluated for Analyser Error. For duplicate analyses of blood, the Analyser Error for the 2-min method was 3.7, 4.0 and 5.0% for the three OSM3s when the percent carboxyhaemoglobin increased by two above resting values. With quintuplicate analyses of blood, the corresponding errors were reduced to 0.8, 0.9 and 1.0% for the 2-min method when the percent carboxyhaemoglobin increased by 5.5 above resting values. In summary, to minimise the Analyser Error to approximately ≤1% on an OSM3 hemoximeter, researchers should make ≥5 replicate measures of percent carboxyhaemoglobin, and the volume of CO administered should be sufficient to increase percent carboxyhaemoglobin by ≥5.5 above baseline levels.
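The replicate-based error computation can be sketched in a few lines. The formula below (typical error = SD of paired differences divided by √2, expressed relative to the mean) is a common definition and is an assumption here, since the paper's exact computation is not reproduced; all readings are hypothetical:

```python
import numpy as np

rep1 = np.array([6.1, 7.9, 6.4, 8.2, 7.0])  # hypothetical first %HbCO readings
rep2 = np.array([6.3, 7.6, 6.6, 8.0, 7.2])  # hypothetical duplicate readings

typical_error = np.std(rep1 - rep2, ddof=1) / np.sqrt(2)
analyser_error_pct = 100 * typical_error / np.mean(np.concatenate([rep1, rep2]))
print(f"Analyser Error ≈ {analyser_error_pct:.1f}%")
```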
NASA Astrophysics Data System (ADS)
Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting
2018-02-01
Continuous-wave near-infrared spectroscopy (NIRS) devices have attracted attention for clinical and health-care applications in noninvasive hemodynamic measurement. Baseline shift in these measurements has drawn particular interest because of its clinical importance; nonetheless, currently published correction methods have low reliability or high variability. In this study, we identified a well-performing polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopy evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS, and found that a 4th-order polynomial fitting function outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters on a solid phantom, we compared the fitting performance of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th-order fits are significantly smaller than the corresponding 2nd-order values. By using the reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
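The baseline fit itself is a one-call operation. The following minimal sketch (synthetic drift and signal, all values hypothetical) compares 2nd- and 4th-order polynomial baselines via R and SSE, as the study does, and subtracts the 4th-order fit:

```python
import numpy as np

t = np.linspace(0, 60, 600)                           # 60 s of hypothetical NIRS data
drift = 0.5 + 0.01*t - 2e-4*t**2 + 3e-6*t**3           # slow baseline drift
signal = drift + 0.02*np.sin(2*np.pi*1.0*t)            # plus a hemodynamic oscillation

for order in (2, 4):
    fit = np.polyval(np.polyfit(t, signal, order), t)
    sse = np.sum((signal - fit)**2)                    # sum of squares due to error
    r = np.corrcoef(signal, fit)[0, 1]                 # correlation between fit and data
    print(f"order {order}: R = {r:.4f}, SSE = {sse:.4g}")

corrected = signal - np.polyval(np.polyfit(t, signal, 4), t)  # baseline-removed trace
```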
Developing Best Practices for Detecting Change at Marine Renewable Energy Sites
NASA Astrophysics Data System (ADS)
Linder, H. L.; Horne, J. K.
2016-02-01
In compliance with the National Environmental Policy Act (NEPA), an evaluation of environmental effects is mandatory for obtaining permits for any Marine Renewable Energy (MRE) project in the US. Evaluation includes an assessment of baseline conditions and on-going monitoring during operation to determine if biological conditions change relative to the baseline. Currently, there are no best practices for the analysis of MRE monitoring data. We have developed an approach to evaluate and recommend analytic models used to characterize and detect change in biological monitoring data. The approach includes six steps: review current MRE monitoring practices, identify candidate models to analyze data, fit models to a baseline dataset, develop simulated scenarios of change, evaluate model fit to simulated data, and produce recommendations on the choice of analytic model for monitoring data. An empirical data set from a proposed tidal turbine site at Admiralty Inlet, Puget Sound, Washington was used to conduct the model evaluation. Candidate models that were evaluated included: linear regression, time series, and nonparametric models. The model-fit diagnostics Root-Mean-Square Error and Mean Absolute Scaled Error were used to measure the accuracy of predicted values from each model. A power analysis was used to evaluate the ability of each model to measure and detect change from baseline conditions. As many of these models have yet to be applied in MRE monitoring studies, results of this evaluation will generate comprehensive guidelines on the choice of model to detect change in environmental monitoring data from MRE sites. The creation of standardized guidelines for model selection enables accurate comparison of change between life stages of an MRE project, within life stages to meet real-time regulatory requirements, and comparison of environmental changes among MRE sites.
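The two fit diagnostics are simple to state precisely. A minimal sketch, with hypothetical observation and prediction arrays standing in for the Admiralty Inlet series:

```python
import numpy as np

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def mase(obs, pred):
    # mean absolute error scaled by the in-sample naive (persistence) forecast error
    return np.mean(np.abs(obs - pred)) / np.mean(np.abs(np.diff(obs)))

obs = np.array([4.2, 4.8, 5.1, 4.9, 5.6, 6.0, 5.8])   # hypothetical monitoring metric
pred = np.array([4.0, 4.6, 5.2, 5.0, 5.3, 5.9, 6.1])  # a candidate model's predictions
print(f"RMSE = {rmse(obs, pred):.3f}, MASE = {mase(obs, pred):.3f}")
```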
Error analyses of JEM/SMILES standard products on L2 operational system
NASA Astrophysics Data System (ADS)
Mitsuda, C.; Takahashi, C.; Suzuki, M.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.; Iwata, Y.; Taniguchi, H.
2009-12-01
SMILES (Superconducting Submillimeter-wave Limb-Emission Sounder), which has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is planned to be launched in September 2009 and will be on board the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES measures the atmospheric limb emission from stratospheric minor constituents in the 640 GHz band. Target species on the L2 operational system are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO and O17OO). SMILES carries 4 K cooled Superconductor-Insulator-Superconductor mixers to carry out high-sensitivity observations. In the sub-millimeter band, water vapor absorption is an important factor in determining the tropospheric and stratospheric brightness temperature, and the uncertainty of water vapor absorption influences the accuracy of the retrieved molecular vertical profiles. Since the SMILES bands are narrow and far from H2O lines, it is a good approximation to treat this uncertainty as a linear function of frequency. We therefore include the 0th- and 1st-order coefficients of a ‘baseline’ function, rather than the water vapor profile itself, in the state vector and retrieve them to remove the influence of the water vapor uncertainty. We performed retrieval simulations using spectra computed by the L2 operational forward model for various H2O conditions (±5 and ±10% differences between the true and a priori profiles in the stratosphere, and ±10 and ±20% in the troposphere). The results show that the incremental errors of the molecules are smaller than 10% of the measurement errors when the height correlations of the baseline coefficients and temperature are assumed to be 10 km. In conclusion, the retrieval of the baseline coefficients effectively suppresses profile error due to bias in the water vapor profile.
International Round-Robin Testing of Bulk Thermoelectrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsin; Porter, Wallace D; Bottner, Harold
2011-11-01
Two international round-robin studies were conducted on transport property measurements of bulk thermoelectric materials. The studies uncovered current measurement problems. In order to obtain the ZT of a material, four separate transport measurements must be made. The round-robin study showed that, among the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity has ±4-9% scatter, and thermal diffusivity has a similar ±5-10% scatter. The reliability of these three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of measurement error is great because three separate runs must be made to determine Cp, and baseline shift is always an issue for commercial DSCs. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport property testing.
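To see how the individual scatters compound, recall that ZT combines all four measurements: ZT = S²T/(ρκ) with κ = D·Cp·d. A hedged first-order propagation sketch follows (hypothetical property values; the Seebeck scatter is assumed to be ~2%, since the report only calls it consistent):

```python
import numpy as np

S, rho = 200e-6, 1.0e-5            # Seebeck (V/K), electrical resistivity (ohm*m)
D, Cp, d = 1.0e-6, 230.0, 7000.0   # diffusivity (m^2/s), specific heat (J/kg/K), density (kg/m^3)
T = 300.0                          # temperature (K)

kappa = D * Cp * d                 # thermal conductivity (W/m/K)
ZT = S**2 * T / (rho * kappa)

rel = {"S": 0.02, "rho": 0.065, "D": 0.075, "Cp": 0.10}   # relative scatters (assumed/quoted)
# First-order propagation: S enters squared, the others linearly.
rel_ZT = np.sqrt((2*rel["S"])**2 + rel["rho"]**2 + rel["D"]**2 + rel["Cp"]**2)
print(f"ZT = {ZT:.2f} ± {100*rel_ZT:.0f}%")
```

With the quoted scatters, the Cp term is the single largest contributor to the ZT uncertainty, consistent with the report's emphasis on the DSC problems.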
Bishop, Lauri; Khan, Moiz; Martelli, Dario; Quinn, Lori; Stein, Joel; Agrawal, Sunil
2017-10-01
Many robotic devices in rehabilitation incorporate an assist-as-needed haptic guidance paradigm to promote training. This error reduction model, while beneficial for skill acquisition, could be detrimental for long-term retention. Error augmentation (EA) models have been explored as alternatives. A robotic Tethered Pelvic Assist Device has been developed to study force application to the pelvis on gait and was used here to induce weight shift onto the paretic (error reduction) or nonparetic (error augmentation) limb during treadmill training. The purpose of these case reports is to examine the effects of training with these two paradigms to reduce load force asymmetry during gait in two individuals after stroke (>6 mos). Both participants presented with baseline gait asymmetry, although they were independent community ambulators. Participants underwent 1-hr training sessions on 3 days using either the error reduction or error augmentation model. Outcomes included the Borg rating of perceived exertion scale for treatment tolerance and measures of force and stance symmetry. Both participants tolerated training. Force symmetry (measured on the treadmill) improved from pretraining to posttraining (36.58% and 14.64% gains), however, with limited transfer to overground gait measures (stance symmetry gains of 9.74% and 16.21%). Training with the Tethered Pelvic Assist Device proved feasible for improving force symmetry on the treadmill irrespective of training model. Future work should consider methods to increase transfer to overground gait.
NASA Technical Reports Server (NTRS)
Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert
2004-01-01
The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience in describing the RMS of errors across the field-of-regard (FOR), and second for convenience in combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.
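The eigenvalue/eigenvector study rests on a simple linear-algebra pattern. A minimal sketch, assuming a toy error-mapping matrix M (component error parameters to delay errors at several field points) and a parameter covariance Sigma, neither of which is taken from SIM:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 4))        # maps 4 error parameters to 6 field-point delay errors
Sigma = np.diag([1.0, 0.5, 0.5, 0.1])  # assumed parameter covariance

C = M @ Sigma @ M.T                    # covariance of delay errors across the field of regard
w, V = np.linalg.eigh(C)               # eigen-decomposition, eigenvalues ascending
print("dominant delay-error mode:", V[:, -1])
print("its variance:", w[-1])
```

Ranking the eigenvalues shows which parameter combinations dominate the delay error, which is how component-level requirements can be prioritized.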
Atmospheric pressure loading effects on Global Positioning System coordinate determinations
NASA Technical Reports Server (NTRS)
Vandam, Tonie M.; Blewitt, Geoffrey; Heflin, Michael B.
1994-01-01
Earth deformation signals caused by atmospheric pressure loading are detected in vertical position estimates at Global Positioning System (GPS) stations. Surface displacements due to changes in atmospheric pressure account for up to 24% of the total variance in the GPS height estimates. The detected loading signals are larger at higher latitudes where pressure variations are greatest; the largest effect is observed at Fairbanks, Alaska (latitude 65 deg), with a signal root mean square (RMS) of 5 mm. Out of 19 continuously operating GPS sites (with a mean of 281 daily solutions per site), 18 show a positive correlation between the GPS vertical estimates and the modeled loading displacements. Accounting for loading reduces the variance of the vertical station positions on 12 of the 19 sites investigated. Removing the modeled pressure loading from GPS determinations of baseline length for baselines longer than 6000 km reduces the variance on 73 of the 117 baselines investigated. The slight increase in variance for some of the sites and baselines is consistent with expected statistical fluctuations. The results from most stations are consistent with approximately 65% of the modeled pressure load being found in the GPS vertical position measurements. Removing an annual signal from both the measured heights and the modeled load time series leaves this value unchanged. The source of the remaining discrepancy between the modeled and observed loading signal may be the result of (1) anisotropic effects in the Earth's loading response, (2) errors in GPS estimates of tropospheric delay, (3) errors in the surface pressure data, or (4) annual signals in the time series of loading and station heights. In addition, we find that using site dependent coefficients, determined by fitting local pressure to the modeled radial displacements, reduces the variance of the measured station heights as well as or better than using the global convolution sum.
NASA Technical Reports Server (NTRS)
Neumann, Maxim; Hensley, Scott; Lavalle, Marco; Ahmed, Razi
2013-01-01
This paper concerns forest remote sensing using JPL's multi-baseline polarimetric interferometric UAVSAR data. It presents exemplary results and analyzes the possibilities and limitations of using SAR Tomography and Polarimetric SAR Interferometry (PolInSAR) techniques for the estimation of forest structure. Performance and error indicators for the applicability and reliability of the used multi-baseline (MB) multi-temporal (MT) PolInSAR random volume over ground (RVoG) model are discussed. Experimental results are presented based on JPL's L-band repeat-pass polarimetric interferometric UAVSAR data over temperate and tropical forest biomes in the Harvard Forest, Massachusetts, and in the La Amistad Park, Panama and Costa Rica. The results are partially compared with ground field measurements and with air-borne LVIS lidar data.
Possibility of measuring Adler angles in charged current single pion neutrino-nucleus interactions
NASA Astrophysics Data System (ADS)
Sánchez, F.
2016-05-01
Uncertainties in modeling neutrino-nucleus interactions are a major contribution to systematic errors in long-baseline neutrino oscillation experiments. Accurate modeling of neutrino interactions requires additional experimental observables such as the Adler angles which carry information about the polarization of the Δ resonance and the interference with nonresonant single pion production. The Adler angles were measured with limited statistics in bubble chamber neutrino experiments as well as in electron-proton scattering experiments. We discuss the viability of measuring these angles in neutrino interactions with nuclei.
The precision of a special purpose analog computer in clinical cardiac output determination.
Sullivan, F J; Mroz, E A; Miller, R E
1975-01-01
Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394
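The 10%-in-99-cases figure follows from the quoted 8.7% per-curve standard deviation; a quick arithmetic check (ours, not the paper's, assuming normally distributed errors):

```python
import math

sd_single = 8.7                       # % SD of one dye-dilution determination
sd_mean5 = sd_single / math.sqrt(5)   # SD of the mean of five curves
print(f"SD of 5-curve mean = {sd_mean5:.1f}%")
print(f"99% two-sided bound = {2.576 * sd_mean5:.1f}%")  # ≈ 10%, matching the claim
```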
A Conjoint Analysis Framework for Evaluating User Preferences in Machine Translation
Kirchhoff, Katrin; Capurro, Daniel; Turner, Anne M.
2013-01-01
Despite much research on machine translation (MT) evaluation, there is surprisingly little work that directly measures users’ intuitive or emotional preferences regarding different types of MT errors. However, the elicitation and modeling of user preferences is an important prerequisite for research on user adaptation and customization of MT engines. In this paper we explore the use of conjoint analysis as a formal quantitative framework to assess users’ relative preferences for different types of translation errors. We apply our approach to the analysis of MT output from translating public health documents from English into Spanish. Our results indicate that word order errors are clearly the most dispreferred error type, followed by word sense, morphological, and function word errors. The conjoint analysis-based model is able to predict user preferences more accurately than a baseline model that chooses the translation with the fewest errors overall. Additionally we analyze the effect of using a crowd-sourced respondent population versus a sample of domain experts and observe that main preference effects are remarkably stable across the two samples. PMID:24683295
Ehgoetz Martens, Kaylena A; Ellard, Colin G; Almeida, Quincy J
2015-03-01
Dopaminergic replacement therapy is believed to improve sensory processing in PD, while delayed perceptual speed is thought to be caused by a predominantly cholinergic deficit; however, it is unclear whether sensory-perceptual deficits are a result of corrupt sensory processing or of a delay in updating perceived feedback during movement. The current study aimed to examine these two hypotheses by manipulating visual flow speed and dopaminergic medication to determine which influenced distance estimation in PD. Fourteen PD and sixteen HC participants were instructed to estimate the distance of a remembered target by walking to the position the target formerly occupied. This task was completed in virtual reality in order to manipulate the visual flow (VF) speed in real time. Three conditions were carried out: (1) BASELINE: VF speed was equal to participants' real-time movement speed; (2) SLOW: VF speed was reduced by 50%; (3) FAST: VF speed was increased by 30%. Individuals with PD performed the experiment in their ON and OFF states. PD participants demonstrated significantly greater judgement error during the BASELINE and FAST conditions compared to HC, and did not improve their judgement error during the SLOW condition. Additionally, PD participants had greater variable error during BASELINE compared to HC; however, during the SLOW condition, they had significantly less variable error compared to BASELINE and similar variable error to HC participants. Overall, dopaminergic medication did not significantly influence judgement error. These results therefore suggest that corrupt processing of sensory information, rather than delayed updating of sensory feedback, is the main contributor to sensory-perceptual deficits during movement in PD.
DiStefano, Lindsay J; Padua, Darin A; DiStefano, Michael J; Marshall, Stephen W
2009-03-01
Anterior cruciate ligament (ACL) injury prevention programs show promising results in changing movement; however, little information exists regarding whether a program designed for an individual's movements may be effective or how baseline movements may affect outcomes. We hypothesized that a program designed to change specific movements would be more effective than a "one-size-fits-all" program, that the greatest improvement would be observed among individuals with the most baseline error, and that subjects of different ages and sexes would respond similarly. Randomized controlled trial; Level of evidence, 1. One hundred seventy-three youth soccer players from 27 teams were randomly assigned to a generalized or stratified program. Subjects were videotaped during jump-landing trials before and after the program and were assessed using the Landing Error Scoring System (LESS), a validated clinical movement analysis tool. A high LESS score indicates more errors. Generalized players performed the same exercises, while the stratified players performed exercises to correct their initial movement errors. Change scores were compared between groups of varying baseline errors, ages, sexes, and programs. Subjects with the highest baseline LESS score improved the most (95% CI, -3.4 to -2.0). High school subjects (95% CI, -1.7 to -0.98) improved their technique more than pre-high school subjects (95% CI, -1.0 to -0.4). There was no difference between the programs or sexes. Players with the greatest amount of movement errors experienced the most improvement. A program's effectiveness may be enhanced if this population is targeted.
How many drinks did you have on September 11, 2001?
Perrine, M W Bud; Schroder, Kerstin E E
2005-07-01
This study tested the predictability of error in retrospective self-reports of alcohol consumption on September 11, 2001, among 80 Vermont light, medium and heavy drinkers. Subjects were 52 men and 28 women participating in daily self-reports of alcohol consumption for a total of 2 years, collected via interactive voice response technology (IVR). In addition, retrospective self-reports of alcohol consumption on September 11, 2001, were collected by telephone interview 4-5 days following the terrorist attacks. Retrospective error was calculated as the difference between the IVR self-report of drinking behavior on September 11 and the retrospective self-report collected by telephone interview. Retrospective error was analyzed as a function of gender and baseline drinking behavior during the 365 days preceding September 11, 2001 (termed "the baseline"). The intraclass correlation (ICC) between daily IVR and retrospective self-reports of alcohol consumption on September 11 was .80. Women provided, on average, more accurate self-reports (ICC = .96) than men (ICC = .72) but displayed more underreporting bias in retrospective responses. Amount and individual variability of alcohol consumption during the 1-year baseline explained, on average, 11% of the variance in overreporting (r = .33), 9% of the variance in underreporting (r = .30) and 25% of the variance in the overall magnitude of error (r = .50), with correlations up to .62 (r² = .38). The size and direction of error were clearly predictable from the amount and variation in drinking behavior during the 1-year baseline period. The results demonstrate the utility and detail of information that can be derived from daily IVR self-reports in the analysis of retrospective error.
An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)
NASA Technical Reports Server (NTRS)
Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.
1990-01-01
Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made from July through September 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan is inserted into Venus orbit in August 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged-particle effect is completely calibrated with S- and X-band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame and by including high-frequency Earth tidal terms in the Earth rotation model, adds a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.
Tectonic motion site survey of the National Radio Astronomy Observatory, Green Bank, West Virginia
NASA Technical Reports Server (NTRS)
Webster, W. J., Jr.; Allenby, R. J.; Hutton, L. K.; Lowman, P. D., Jr.; Tiedemann, H. A.
1979-01-01
A geological and geophysical site survey was made of the area around the National Radio Astronomy Observatory (NRAO) to determine whether there are at present local tectonic movements that could introduce significant errors to Very Long Baseline Interferometry (VLBI) geodetic measurements. The site survey consisted of a literature search, photogeologic mapping with Landsat and Skylab photographs, a field reconnaissance, and installation of a seismometer at the NRAO. It is concluded that local tectonic movement will not contribute significantly to VLBI errors. It is recommended that similar site surveys be made of all locations used for VLBI or laser ranging.
Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre
2011-02-16
With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R(1+R)^(-1), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (this is a classical measurement error model) and M_i^tr = M_i^mes V_i^M (this is a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
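The two error structures are easy to confuse. A minimal simulation sketch (all distributional parameters hypothetical) shows how the classical error enters through Q and the Berkson error through M:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
f = 1.0                                              # normalizing multiplier (its error is ignored)
Q_true = rng.lognormal(mean=2.0, sigma=0.8, size=n)  # true thyroid radioiodine content
M_meas = rng.lognormal(mean=1.5, sigma=0.3, size=n)  # measured thyroid mass

# Classical error: the measurement scatters around the true value.
Q_meas = Q_true * rng.lognormal(0.0, 0.33, size=n)
# Berkson error: the true value scatters around the measurement.
M_true = M_meas * rng.lognormal(0.0, 0.42, size=n)

D_meas = f * Q_meas / M_meas                         # calculated dose
D_true = f * Q_true / M_true                         # true dose
print("corr(log D_meas, log D_true) =",
      np.corrcoef(np.log(D_meas), np.log(D_true))[0, 1])
```

Naively regressing the response on D_meas ignores both structures, which is what the calibration, likelihood, and SIMEX methods above are designed to correct.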
Impact of Robotic Antineoplastic Preparation on Safety, Workflow, and Costs
Seger, Andrew C.; Churchill, William W.; Keohane, Carol A.; Belisle, Caryn D.; Wong, Stephanie T.; Sylvester, Katelyn W.; Chesnick, Megan A.; Burdick, Elisabeth; Wien, Matt F.; Cotugno, Michael C.; Bates, David W.; Rothschild, Jeffrey M.
2012-01-01
Purpose: Antineoplastic preparation presents unique safety concerns and consumes significant pharmacy staff time and costs. Robotic antineoplastic and adjuvant medication compounding may provide incremental safety and efficiency advantages compared with standard pharmacy practices. Methods: We conducted a direct observation trial in an academic medical center pharmacy to compare the effects of usual/manual antineoplastic and adjuvant drug preparation (baseline period) with robotic preparation (intervention period). The primary outcomes were serious medication errors and staff safety events with the potential for harm of patients and staff, respectively. Secondary outcomes included medication accuracy determined by gravimetric techniques, medication preparation time, and the costs of both ancillary materials used during drug preparation and personnel time. Results: Among 1,421 and 972 observed medication preparations, we found nine (0.7%) and seven (0.7%) serious medication errors (P = .8) and 73 (5.1%) and 28 (2.9%) staff safety events (P = .007) in the baseline and intervention periods, respectively. Drugs failed accuracy measurements in 12.5% (23 of 184) and 0.9% (one of 110) of preparations in the baseline and intervention periods, respectively (P < .001). Mean drug preparation time increased by 47% when using the robot (P = .009). Labor costs were similar in both study periods, although the ancillary material costs decreased by 56% in the intervention period (P < .001). Conclusion: Although robotically prepared antineoplastic and adjuvant medications did not reduce serious medication errors, both staff safety and accuracy of medication preparation were improved significantly. Future studies are necessary to address the overall cost effectiveness of these robotic implementations. PMID:23598843
NASA Astrophysics Data System (ADS)
Soto-López, Carlos D.; Meixner, Thomas; Ferré, Ty P. A.
2011-12-01
Since its inception in the mid-1960s, the use of temperature time series (thermographs) to estimate vertical fluxes has found increasing application in the hydrologic community. Beginning in 2000, researchers have examined the impacts of measurement and parameter uncertainty on estimates of vertical fluxes. To date, the effect of temperature measurement discretization (resolution), a characteristic of all digital temperature loggers, on the determination of vertical fluxes has not been considered. In this technical note we expand the analysis of recently published work to include the effects of temperature measurement resolution on estimates of vertical fluxes using temperature amplitude and phase shift information. We show that errors in thermal front velocity estimation introduced by discretizing thermographs differ when amplitude or phase shift data are used to estimate vertical fluxes. We also show that under similar circumstances sensor resolution limits the range over which vertical velocities are accurately reproduced more than uncertainty in temperature measurements, uncertainty in sensor separation distance, and uncertainty in the thermal diffusivity combined. These effects represent the baseline error present, and thus the best-case scenario, when discrete temperature measurements are used to infer vertical fluxes. The errors associated with measurement resolution can be minimized by using the highest-resolution sensors available, but thoughtful experimental design could allow users to select the most cost-effective temperature sensors to fit their measurement needs.
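A minimal sketch of the resolution effect for an amplitude-ratio method: quantizing a diurnal signal to the logger's step distorts, and at coarse resolution destroys, the small amplitude at depth (all values hypothetical, not from the study):

```python
import numpy as np

t = np.linspace(0, 2, 2000, endpoint=False)   # two days of samples
T_shallow = 2.00 * np.sin(2*np.pi*t)          # 2.00 degC amplitude at the shallow sensor
T_deep = 0.15 * np.sin(2*np.pi*t - 1.0)       # damped, phase-shifted signal at depth

def amplitude(x):
    # diurnal Fourier component: 2 cycles in this 2-day record -> bin 2
    return 2 * np.abs(np.fft.rfft(x)[2]) / len(x)

for res in (0.001, 0.0625, 0.5):              # logger resolution in degC
    q_sh = np.round(T_shallow / res) * res    # quantized records
    q_dp = np.round(T_deep / res) * res
    print(f"res = {res:>6} degC  amplitude ratio = {amplitude(q_dp)/amplitude(q_sh):.4f}")
```

At 0.5 degC resolution the deep signal quantizes to zero, so any thermal front velocity inferred from the amplitude ratio would be meaningless, illustrating why resolution can dominate the error budget.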
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of a differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensors, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, using the total magnetic intensity (TMI) scalar as a reference through two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by the nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system then outputs measurements in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of the TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameter estimation in the simulation is close to 100%. The experimental root-mean-square errors (RMSE) of the TMI and tensor components are less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
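The scalar-reference fit can be sketched compactly. Below is a hedged synthetic example (not the authors' code; the 9-parameter lower-triangular nonorthogonality parameterization and all values are assumptions) that fits one sensor's bias, scale, and nonorthogonality terms by Levenberg-Marquardt so the corrected vector magnitude matches the TMI:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
TMI = 50_000.0                                     # nT, assumed locally constant field
dirs = rng.standard_normal((200, 3))
h_true = TMI * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def forward(h, p):
    # p = 3 scale factors, 3 nonorthogonality terms (small), 3 biases (nT)
    k, u, b = p[:3], p[3:6], p[6:9]
    P = np.array([[1, 0, 0], [u[0], 1, 0], [u[1], u[2], 1]])
    return (np.diag(1 + k) @ P @ h.T).T + b

p_true = np.concatenate([0.02 * rng.standard_normal(3),
                         0.01 * rng.standard_normal(3),
                         30.0 * rng.standard_normal(3)])
h_meas = forward(h_true, p_true) + rng.normal(0, 1.0, h_true.shape)

def residuals(p):
    k, u, b = p[:3], p[3:6], p[6:9]
    A = np.diag(1 + k) @ np.array([[1, 0, 0], [u[0], 1, 0], [u[1], u[2], 1]])
    corrected = np.linalg.solve(A, (h_meas - b).T).T   # invert the error model
    return np.linalg.norm(corrected, axis=1) - TMI     # mismatch with the TMI reference

fit = least_squares(residuals, np.zeros(9), method="lm")
print("recovered vs true parameters (max abs diff):", np.abs(fit.x - p_true).max())
```

A full differencing tensor system would repeat this per sensor and then estimate the inter-sensor misalignments, which is how a parameter count like the 48 mentioned above arises.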
Muon Energy Calibration of the MINOS Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyagawa, Paul S.
MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. These data are used to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within a 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~10%, which is equivalent to increasing the amount of data by 20%.
NASA Technical Reports Server (NTRS)
Koblinsky, C. J.; Ryan, J.; Braatz, L.; Klosko, S. M.
1993-01-01
The overall accuracy of the U.S. Navy Geosat altimeter wet atmospheric range delay caused by refraction through the atmosphere is directly assessed by comparing the estimates made from the DMSP Special Sensor Microwave/Imager and the U.S. Navy Fleet Numerical Ocean Center forecast model for Geosat with measurements of total zenith columnar water vapor content from four VLBI sites. The assessment is made by comparing time series of range delay from various methods at each location. To determine the importance of diurnal variation in water vapor content in noncoincident estimates, the VLBI measurements were made at 15-min intervals over a few days. The VLBI measurements showed strong diurnal variations in columnar water vapor at several sites, causing errors of the order 3 cm rms in any noncoincident measurement of the wet troposphere range delay. These errors have an effect on studies of annual and interannual changes in sea level with Geosat data.
Return of Postural Control to Baseline After Anaerobic and Aerobic Exercise Protocols
Fox, Zachary G; Mihalik, Jason P; Blackburn, J Troy; Battaglini, Claudio L; Guskiewicz, Kevin M
2008-01-01
Context: With regard to sideline concussion testing, the effect of fatigue associated with different types of exercise on postural control is unknown. Objective: To evaluate the effects of fatigue on postural control in healthy college-aged athletes performing anaerobic and aerobic exercise protocols and to establish an immediate recovery time course from each exercise protocol for postural control measures to return to baseline status. Design: Counterbalanced, repeated measures. Setting: Research laboratory. Patients or Other Participants: Thirty-six collegiate athletes (18 males, 18 females; age = 19.00 ± 1.01 years, height = 172.44 ± 10.47 cm, mass = 69.72 ± 12.84 kg). Intervention(s): Participants completed 2 counterbalanced sessions within 7 days. Each session consisted of 1 exercise protocol followed by postexercise measures of postural control taken at 3-, 8-, 13-, and 18-minute time intervals. Baseline measures were established during the first session, before the specified exertion protocol was performed. Main Outcome Measure(s): Balance Error Scoring System (BESS) results, sway velocity, and elliptical sway area. Results: We found a decrease in postural control after each exercise protocol for all dependent measures. An interaction was noted between exercise protocol and time for total BESS score (P = .002). For both exercise protocols, all measures of postural control returned to baseline within 13 minutes. Conclusions: Postural control was negatively affected after anaerobic and aerobic exercise protocols as measured by total BESS score, elliptical sway area, and sway velocity. The effect of exertion lasted up to 13 minutes after each exercise was completed. Certified athletic trainers and clinicians should be aware of these effects and their recovery time course when determining an appropriate time to administer sideline assessments of postural control after a suspected mild traumatic brain injury. PMID:18833307
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method that handles the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency at the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not account for interfering large-scale deformation effects. However, a separation may be feasible in combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
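The network adjustment reduces to an ordinary rank-deficient least-squares problem. A minimal toy sketch (one error parameter per image instead of two, all numbers hypothetical) of the minimum-norm solution that yields quasi-absolute per-image errors:

```python
import numpy as np

rng = np.random.default_rng(5)
n_img = 8
pairs = [(i, j) for i in range(n_img) for j in range(i + 1, n_img) if j - i <= 3]

A = np.zeros((len(pairs), n_img))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = 1.0, -1.0      # interferogram error = image_i error - image_j error

e_true = rng.normal(0, 5.0, n_img)                 # per-image baseline errors (hypothetical units)
obs = A @ e_true + rng.normal(0, 0.5, len(pairs))  # per-interferogram estimates from phase fits

e_hat = np.linalg.pinv(A) @ obs       # minimum-norm least-squares solution (pseudoinverse)
# Image errors are only determined up to a common offset; compare after de-meaning:
print(np.abs((e_hat - e_hat.mean()) - (e_true - e_true.mean())).max())
```

The overdetermination (163 interferograms from 31 images in the dataset above) is what allows outlying estimates, e.g. from unwrapping errors, to be detected and iteratively down-weighted.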
The Effect of Antenna Position Errors on Redundant-Baseline Calibration of HERA
NASA Astrophysics Data System (ADS)
Orosz, Naomi; Dillon, Joshua; Ewall-Wice, Aaron; Parsons, Aaron; HERA Collaboration
2018-01-01
HERA (the Hydrogen Epoch of Reionization Array) is a large, highly-redundant radio interferometer in South Africa currently being built out to 350 14-m dishes. Its mission is to probe large scale structure during and prior to the epoch of reionization using the 21 cm hyperfine transition of neutral hydrogen. The array is designed to be calibrated using redundant baselines of known lengths. However, the dishes can deviate from ideal positions, with errors on the order of a few centimeters. This potentially increases foreground contamination of the 21 cm power spectrum in the cleanest part of Fourier space. The calibration algorithm treats groups of baselines that should be redundant, but are not due to position errors, as if they actually are. Accurate, precise calibration is critical because the foreground signals are 100,000 times stronger than the reionization signal. We explain the origin of this effect and discuss weighting strategies to mitigate it.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not great.
Kado, DM; Huang, MH; Karlamangla, AS; Cawthon, P; Katzman, W; Hillier, TA; Ensrud, K; Cummings, SR
2012-01-01
Age-related hyperkyphosis is thought to be a result of underlying vertebral fractures, but studies suggest that among the most hyperkyphotic women, only one in three have underlying radiographic vertebral fractures. Although commonly observed, there is no widely accepted definition of hyperkyphosis in older persons, and other than vertebral fracture, no major causes have been identified. To identify important correlates of kyphosis and risk factors for its progression over time, we conducted a 15-year retrospective cohort study of 1,196 women, aged 65 years and older at baseline (1986-88), from four communities across the United States: Baltimore County, MD; Minneapolis, MN; Portland, OR; and the Monongahela Valley, PA. Cobb angle kyphosis was measured from radiographs obtained at baseline and an average of 3.7 and 15 years later. Repeated-measures, mixed-effects analyses were performed. At baseline, the mean kyphosis angle was 44.7 degrees (standard error 0.4, standard deviation 11.9) and significant correlates included a family history of hyperkyphosis, prevalent vertebral fracture, low bone mineral density, greater body weight, degenerative disc disease, and smoking. Over an average of 15 years, the mean increase in kyphosis was 7.1 degrees (standard error 0.25). Independent determinants of greater kyphosis progression were prevalent and incident vertebral fractures, low bone mineral density and concurrent bone density loss, low body weight, and concurrent weight loss. Thus, age-related kyphosis progression may be best prevented by slowing bone density loss and avoiding weight loss. PMID:22865329
Stripe-PZT Sensor-Based Baseline-Free Crack Diagnosis in a Structure with a Welded Stiffener.
An, Yun-Kyu; Shen, Zhiqi; Wu, Zhishen
2016-09-16
This paper proposes a stripe-PZT sensor-based baseline-free crack diagnosis technique for the heat affected zone (HAZ) of a structure with a welded stiffener. The proposed technique enables one to identify and localize a crack in the HAZ using only current data measured using a stripe-PZT sensor. The use of the stripe-PZT sensor makes it possible to significantly improve the applicability to real structures and minimize man-made errors associated with the installation process by embedding multiple piezoelectric sensors onto a printed circuit board. Moreover, a new frequency-wavenumber analysis-based baseline-free crack diagnosis algorithm minimizes false alarms caused by environmental variations by avoiding simple comparison with baseline data accumulated from the pristine condition of a target structure. The proposed technique is numerically as well as experimentally validated using a plate-like structure with a welded stiffener, revealing that it successfully identifies and localizes a crack in the HAZ.
Medication safety initiative in reducing medication errors.
Nguyen, Elisa E; Connolly, Phyllis M; Wong, Vivian
2010-01-01
The purpose of the study was to evaluate whether a Medication Pass Time Out initiative was effective and sustainable in reducing medication administration errors. A retrospective descriptive method was used for this research, in which a structured Medication Pass Time Out program was implemented following staff and physician education. As a result, the rate of interruptions during the medication administration process decreased from 81% to 0%. From the observations at baseline, 6 months, and 1 year after implementation, the percentage of medication doses administered without interruption improved from 81% to 99%. Medication doses administered without errors at baseline, 6 months, and 1 year improved from 98% to 100%.
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors on an airborne separation assistance system under increasing traffic density. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 knots at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in the separation distance buffer.
Earth Orientation Effects on Mobile VLBI Baselines
NASA Technical Reports Server (NTRS)
Allen, S. L.
1984-01-01
Improvements in data quality for the mobile VLBI systems have placed higher accuracy requirements on Earth orientation calibrations. Errors in these calibrations may give rise to systematic effects in the non-length components of the baselines. Various sources of Earth orientation data were investigated for calibration of mobile VLBI baselines. Significant differences in quality between the several available sources of UT1-UTC were found. It was shown that the JPL Kalman-filtered space technology data were at least as good as any other source and adequate to the needs of current mobile VLBI systems and observing plans. For polar motion, the values from all services suffice. The effect of Earth orientation errors on the accuracy of differenced baselines was also investigated. It is shown that the effect is negligible for the current mobile systems and observing plan.
Linzer, Mark; Poplau, Sara; Brown, Roger; Grossman, Ellie; Varkey, Anita; Yale, Steven; Williams, Eric S; Hicks, Lanis; Wallock, Jill; Kohnhorst, Diane; Barbouche, Michael
2017-01-01
While primary care work conditions are associated with adverse clinician outcomes, little is known about the effect of work condition interventions on quality or safety. A cluster randomized controlled trial of 34 clinics in the upper Midwest and New York City. Primary care clinicians and their diabetic and hypertensive patients. Quality improvement projects to improve communication between providers, workflow design, and chronic disease management. Intervention clinics received brief summaries of their clinician and patient outcome data at baseline. We measured work conditions and clinician and patient outcomes both at baseline and 6-12 months post-intervention. Multilevel regression analyses assessed the impact of work condition changes on outcomes. Subgroup analyses assessed impact by intervention category. There were no significant differences in error reduction (19% vs. 11%, OR of improvement 1.84, 95% CI 0.70, 4.82, p = 0.21) or quality of care improvement (19% improved vs. 44%, OR 0.62, 95% CI 0.58, 1.21, p = 0.42) between intervention and control clinics. The conceptual model linking work conditions, provider outcomes, and error reduction showed significant relationships between work conditions and provider outcomes (p ≤ 0.001) and a trend toward a reduced error rate in providers with lower burnout (OR 1.44, 95% CI 0.94, 2.23, p = 0.09). Limitations include few quality metrics, a short time span, and fewer clinicians recruited than anticipated. Work-life interventions improving clinician satisfaction and well-being do not necessarily reduce errors or improve quality. Longer, more focused interventions may be needed to produce meaningful improvements in patient care. ClinicalTrials.gov # NCT02542995.
Neural control of blood pressure in women: differences according to age
Peinado, Ana B.; Harvey, Ronee E.; Hart, Emma C.; Charkoudian, Nisha; Curry, Timothy B.; Nicholson, Wayne T.; Wallin, B. Gunnar; Joyner, Michael J.; Barnes, Jill N.
2017-01-01
Purpose The blood pressure “error signal” represents the difference between an individual’s mean diastolic blood pressure and the diastolic blood pressure at which 50% of cardiac cycles are associated with a muscle sympathetic nerve activity burst (the “T50”). In this study we evaluated whether T50 and the error signal related to the extent of change in blood pressure during autonomic blockade in young and older women, to study potential differences in sympathetic neural mechanisms regulating blood pressure before and after menopause. Methods We measured muscle sympathetic nerve activity and blood pressure in 12 premenopausal (25±1 years) and 12 postmenopausal women (61±2 years) before and during complete autonomic blockade with trimethaphan camsylate. Results At baseline, young women had a negative error signal (−8±1 versus 2±1 mmHg, p<0.001; respectively) and lower muscle sympathetic nerve activity (15±1 versus 33±3 bursts/min, p<0.001; respectively) than older women. The change in diastolic blood pressure after autonomic blockade was associated with baseline T50 in older women (r=−0.725, p=0.008) but not in young women (r=−0.337, p=0.29). Women with the most negative error signal had the lowest muscle sympathetic nerve activity in both groups (young: r=0.886, p<0.001; older: r=0.870, p<0.001). Conclusions Our results suggest that there are differences in baroreflex control of muscle sympathetic nerve activity between young and older women, using the T50 and error signal analysis. This approach provides further information on autonomic control of blood pressure in women. PMID:28205011
NASA Astrophysics Data System (ADS)
Schreiber, K. Ulrich; Kodet, Jan
2018-02-01
Highly precise time and stable reference frequencies are fundamental requirements for space geodesy. Satellite laser ranging (SLR) is one of these techniques; it differs from the other applications, namely Very Long Baseline Interferometry (VLBI), Global Navigation Satellite Systems (GNSS) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), in that it is an optical two-way measurement technique. That means that there is no need for a clock synchronization process between both ends of the distance covered by the measurement. Under the assumption of isotropy for the speed of light, SLR establishes the only practical realization of the Einstein synchronization process so far. Therefore it is a powerful time transfer technique. However, in order to transfer time between two remote clocks, it is also necessary to tightly control all possible signal delays in the ranging process. This paper discusses the role of time and frequency in SLR as well as the error sources before it addresses the transfer of time between ground and space. The need for improved signal delay control led to a major redesign of the local time and frequency distribution at the Geodetic Observatory Wettzell. Closure measurements can now be used to identify and remove systematic errors in SLR measurements.
Goya, Thiago T; Silva, Rosyvaldo F; Guerra, Renan S; Lima, Marta F; Barbosa, Eline R F; Cunha, Paulo Jannuzzi; Lobo, Denise M L; Buchpiguel, Carlos A; Busatto-Filho, Geraldo; Negrão, Carlos E; Lorenzi-Filho, Geraldo; Ueno-Pardi, Linda M
2016-01-01
To investigate muscle sympathetic nerve activity (MSNA) response and executive performance during mental stress in obstructive sleep apnea (OSA). Individuals with no other comorbidities (age = 52 ± 1 y, body mass index = 29 ± 0.4 kg/m2) were divided into two groups: (1) control (n = 15) and (2) untreated OSA (n = 20) defined by polysomnography. Mini-Mental State Examination (MMSE) and intelligence quotient (IQ) were assessed. Heart rate (HR), blood pressure (BP), and MSNA (microneurography) were measured at baseline and during 3 min of the Stroop Color Word Test (SCWT). Sustained attention and inhibitory control were assessed by the number of correct answers and errors during SCWT. Control and OSA groups (apnea-hypopnea index, AHI = 8 ± 1 and 47 ± 1 events/h, respectively) were similar in age, MMSE, and IQ. Baseline HR and BP were similar and increased similarly during SCWT in control and OSA groups. In contrast, baseline MSNA was higher in OSA compared to controls. Moreover, MSNA significantly increased in the third minute of SCWT in OSA, but remained unchanged in controls (P < 0.05). The number of correct answers was lower and the number of errors was significantly higher during the second and third minutes of SCWT in the OSA group (P < 0.05). There was a significant correlation (P < 0.01) between the number of errors in the third minute of SCWT with AHI (r = 0.59), arousal index (r = 0.55), and minimum O2 saturation (r = -0.57). As compared to controls, MSNA is increased in patients with OSA at rest, and further significant MSNA increments and worse executive performance are seen during mental stress. URL: http://www.clinicaltrials.gov, registration number: NCT002289625. © 2016 Associated Professional Sleep Societies, LLC.
Zhang, Qiuzhao; Yang, Wei; Zhang, Shubi; Liu, Xin
2018-01-12
Global Navigation Satellite System (GNSS) carrier phase measurement over short baselines meets the requirements of deformation monitoring of large structures. However, the carrier phase multipath effect is the main error source in double difference (DD) processing. Many methods exist to deal with multipath errors in Global Positioning System (GPS) carrier phase data, but multipath mitigation for the BeiDou navigation satellite System (BDS) remains a research hotspot because the unique constellation design of BDS makes mitigation different from the GPS case. Multipath error repeats periodically because of its strong correlation with the satellite-reflector-antenna geometry, which is itself repetitive. We analyzed the orbital periods of the BDS satellites, which are consistent with the multipath repeat periods of the corresponding satellites. The results show that the orbital and multipath periods for BDS geostationary earth orbit (GEO) and inclined geosynchronous orbit (IGSO) satellites are about one day, while the periods of MEO satellites are about seven days. The Kalman filter (KF) and Rauch-Tung-Striebel smoother (RTSS) were introduced to extract multipath models from single difference (SD) residuals within a traditional sidereal filter (SF); wavelet filtering and empirical mode decomposition (EMD) were also used to mitigate multipath effects. The experimental results show that all three filtering methods clearly improve baseline accuracy, with the KF-RTSS method performing slightly better than the wavelet and EMD filters. The baseline vector accuracy on the east, north and up (E, N, U) components with the KF-RTSS method improved by 62.8%, 63.6%, 62.5% on day of year 280 and 57.3%, 53.4%, 55.9% on day of year 281, respectively.
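The orbit-repeat property described here is what makes sidereal-type filtering work, and the core shift-and-subtract step is compact enough to sketch. The snippet below is a minimal illustration only: a simple moving average stands in for the paper's KF-RTSS residual extraction, and the function names, sampling convention, and smoothing choice are assumptions rather than the authors' implementation.

```python
import numpy as np

def sidereal_filter(resid_prev, resid_today, lag_epochs, window=11):
    """Shift-and-subtract multipath mitigation.

    resid_prev, resid_today : single-difference residual series (m) from
        two repeat-geometry days, sampled at a constant rate.
    lag_epochs : orbit-repeat period in epochs (about one sidereal day for
        BDS GEO/IGSO, about seven days for BDS MEO, per the abstract).
    """
    # Smooth the earlier day's residuals into a low-noise multipath
    # template (a crude stand-in for the KF-RTSS smoother in the paper).
    kernel = np.ones(window) / window
    template = np.convolve(resid_prev, kernel, mode="same")
    # Align the template with today's repeat geometry and subtract it.
    aligned = np.roll(template, -lag_epochs)  # wrap-around edges would be
    n = min(len(resid_today), len(aligned))   # discarded in practice
    return resid_today[:n] - aligned[:n]
```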
Return of postural control to baseline after anaerobic and aerobic exercise protocols.
Fox, Zachary G; Mihalik, Jason P; Blackburn, J Troy; Battaglini, Claudio L; Guskiewicz, Kevin M
2008-01-01
With regard to sideline concussion testing, the effect of fatigue associated with different types of exercise on postural control is unknown. To evaluate the effects of fatigue on postural control in healthy college-aged athletes performing anaerobic and aerobic exercise protocols and to establish an immediate recovery time course from each exercise protocol for postural control measures to return to baseline status. Counterbalanced, repeated measures. Research laboratory. Thirty-six collegiate athletes (18 males, 18 females; age = 19.00 +/- 1.01 years, height = 172.44 +/- 10.47 cm, mass = 69.72 +/- 12.84 kg). Participants completed 2 counterbalanced sessions within 7 days. Each session consisted of 1 exercise protocol followed by postexercise measures of postural control taken at 3-, 8-, 13-, and 18-minute time intervals. Baseline measures were established during the first session, before the specified exertion protocol was performed. Balance Error Scoring System (BESS) results, sway velocity, and elliptical sway area. We found a decrease in postural control after each exercise protocol for all dependent measures. An interaction was noted between exercise protocol and time for total BESS score (P = .002). For both exercise protocols, all measures of postural control returned to baseline within 13 minutes. Postural control was negatively affected after anaerobic and aerobic exercise protocols as measured by total BESS score, elliptical sway area, and sway velocity. The effect of exertion lasted up to 13 minutes after each exercise was completed. Certified athletic trainers and clinicians should be aware of these effects and their recovery time course when determining an appropriate time to administer sideline assessments of postural control after a suspected mild traumatic brain injury.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2013 CFR
2013-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2014 CFR
2014-10-01
....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2012 CFR
2012-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2011 CFR
2011-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10(exp -15) sec sec(exp -1) (or approx. 0.002 mm sec(exp -1)) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
Translating Radiometric Requirements for Satellite Sensors to Match International Standards.
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where a traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032
Methods for multiple-telescope beam imaging and guiding in the near-infrared
NASA Astrophysics Data System (ADS)
Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.
2018-05-01
Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.
Repeated readings and science: Fluency with expository passages
NASA Astrophysics Data System (ADS)
Kostewicz, Douglas E.
The current study investigated the effects of repeated readings to a fluency criterion (RRFC) for seven students with disabilities using science text. The study employed a single-subject design, specifically two multiple-probe multiple-baseline-across-subjects designs, to evaluate the effects of the RRFC intervention. Results indicated that students met criterion (200 or more correct words per minute with 2 or fewer errors) on four consecutive passages. A majority of students displayed accelerations in correct words per minute and decelerations in incorrect words per minute on successive initial intervention readings, suggesting reading transfer. Students' posttest and maintenance reading scores outperformed their pretest and baseline readings, providing additional evidence of reading transfer. Regarding the relationship to comprehension, students scored higher on oral retell measures after meeting criterion than on initial readings. Overall, the research findings suggested that the RRFC intervention improves science reading fluency for students with disabilities and may also indirectly benefit comprehension.
Bias error reduction using ratios to baseline experiments. Heat transfer case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakroun, W.; Taylor, R.P.; Coleman, H.W.
1993-10-01
Employing a set of experiments devoted to examining the effect of surface finish (riblets) on convective heat transfer as an example, this technical note explores the notion that, while precision uncertainties in experiments can be reduced by repeated trials and averaging, bias errors can be reduced by presenting results as ratios to a baseline experiment. This scheme for bias error reduction can give considerable advantage when parametric effects are investigated experimentally. When the results of an experiment are presented as a ratio with the baseline results, a large reduction in the overall uncertainty can be achieved when all the bias limits in the variables of the experimental result are fully correlated with those of the baseline case. 4 refs.
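The correlated-bias cancellation the note relies on follows directly from the propagation-of-uncertainty formula for a ratio. A minimal sketch using the standard textbook expression rather than the note's own notation (variable names are illustrative):

```python
import math

def ratio_bias_limit(x, bx, xb, bxb, rho=1.0):
    """Relative bias limit of the ratio r = x / xb when the bias limits bx
    and bxb arise from shared elemental sources with correlation rho
    (rho = 1: fully correlated, as in the ratio-to-baseline scheme)."""
    rel_x, rel_b = bx / x, bxb / xb
    return math.sqrt(max(rel_x**2 + rel_b**2 - 2.0 * rho * rel_x * rel_b, 0.0))

# 5% bias limits on both the riblet result and the baseline result:
print(ratio_bias_limit(1.0, 0.05, 1.0, 0.05, rho=1.0))  # 0.0   -> bias cancels
print(ratio_bias_limit(1.0, 0.05, 1.0, 0.05, rho=0.0))  # ~0.071 -> no benefit
```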
Global and regional kinematics with GPS
NASA Technical Reports Server (NTRS)
King, Robert W.
1994-01-01
The inherent precision of the doubly differenced phase measurement and the low cost of instrumentation made GPS the space geodetic technique of choice for regional surveys as soon as the constellation reached acceptable geometry in the area of interest: 1985 in western North America, the early 1990s in most of the world. Instrument and site-related errors for horizontal positioning are usually less than 3 mm, so that the dominant source of error is uncertainty in the reference frame defined by the satellites' orbits and the tracking stations used to determine them. Prior to about 1992, when the tracking network for most experiments was globally sparse, the number of fiducial sites or the level at which they could be tied to an SLR or VLBI reference frame usually set the accuracy limit. Recently, with a global network of over 30 stations, the limit is set more often by deficiencies in models for non-gravitational forces acting on the satellites. For regional networks in the northern hemisphere, reference frame errors are currently about 3 parts per billion (ppb) in horizontal position, allowing centimeter-level accuracies over intercontinental distances and less than 1 mm for a 100 km baseline. The accuracy of GPS measurements for monitoring height variations is generally 2-3 times worse than for horizontal motions. As for VLBI, the primary source of error is unmodeled fluctuations in atmospheric water vapor, but both reference frame uncertainties and some instrument errors are more serious for vertical than for horizontal measurements. Under good conditions, daily repeatabilities at the level of 10 mm rms have been achieved. This paper summarizes the current accuracy of GPS measurements and their implications for the use of SLR to study regional kinematics.
Accounting for dropout bias using mixed-effects models.
Mallinckrodt, C H; Clark, W S; David, S R
2001-01-01
Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
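The LOCF-versus-likelihood contrast is easy to reproduce on simulated longitudinal data of the kind the study describes. The sketch below is schematic rather than the authors' analysis: it imputes endpoints by carrying each subject's last observation forward and, separately, fits a mixed model to all available observations; the simulation parameters are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i in range(100):
    treat = i % 2
    slope = -1.0 - 0.5 * treat + rng.normal(0, 0.3)  # true drug benefit: -0.5
    n_visits = rng.integers(2, 5)                    # dropout: 2-4 visits seen
    for t in range(n_visits):
        rows.append(dict(subject=i, treat=treat, time=t,
                         y=10 + slope * t + rng.normal(0, 1)))
df = pd.DataFrame(rows)

# LOCF endpoint analysis: carry each subject's last observation forward.
locf = df.sort_values("time").groupby("subject").last().reset_index()
locf["change"] = locf["y"] - 10.0        # change from the known baseline mean
print(smf.ols("change ~ treat", data=locf).fit().params["treat"])

# Likelihood-based repeated measures: mixed model on all available data.
mmrm = smf.mixedlm("y ~ time * treat", data=df, groups="subject").fit()
print(mmrm.params["time:treat"])         # treatment effect on the slope
```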
Systematic errors in long baseline oscillation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.; /Fermilab
This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.
Furlan, Leonardo; Sterr, Annette
2018-01-01
Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC, and thereby caused mostly by random measurement error as opposed to learning. We suggest, therefore, that motor learning studies complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
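The two statistics can be computed directly from the quantities the abstract names. A minimal sketch using the conventional formulas (SEM from the baseline SD and a test-retest reliability index such as the ICC; MDC at 95% confidence); the example numbers are invented:

```python
import math

def sem_and_mdc(baseline_sd, reliability, z=1.96):
    """SEM from the baseline standard deviation and a test-retest
    reliability index (e.g. ICC); MDC at the confidence implied by z
    (z = 1.96 gives the usual MDC95)."""
    sem = baseline_sd * math.sqrt(1.0 - reliability)
    mdc = z * math.sqrt(2.0) * sem   # sqrt(2): two measurements are compared
    return sem, mdc

sem, mdc95 = sem_and_mdc(baseline_sd=120.0, reliability=0.90)  # invented values
print(f"SEM = {sem:.1f}, MDC95 = {mdc95:.1f}")
```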
da Silva, F; Heuraux, S; Ricardo, E; Quental, P; Ferreira, J
2016-11-01
We conducted a first assessment of the measurement performance of the in-vessel components at gap 6 of the ITER plasma position reflectometry with the aid of a synthetic Ordinary Mode (O-mode) broadband frequency-modulated continuous-wave reflectometer implemented with REFMUL, a 2D finite-difference time-domain full-wave Maxwell code. These simulations take into account the system location within the vacuum vessel as well as its access to the plasma. The plasma case considered is a baseline scenario from Fusion for Energy. We concluded that for the analyzed scenario, (i) the plasma curvature and non-equatorial position of the antenna have negligible impact on the measurements; (ii) the cavity-like space surrounding the antenna can cause deflection and splitting of the probing beam; and (iii) multi-reflections on the blanket wall cause a substantial error preventing the system from operating within the required error margin.
ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.
van Breukelen, Gerard J P
2013-11-01
The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These two methods can give contradictory results if groups differ at pretest, a phenomenon known as Lord's paradox. The literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest, and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: first, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without a group effect at pretest; second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
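Lord's paradox is easy to reproduce. The sketch below, with invented data for two preexisting groups that differ at pretest and no true treatment effect, fits both models with statsmodels and shows how they can disagree about the group effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
group = rng.integers(0, 2, n)                    # two preexisting groups
pre = 50 + 5 * group + rng.normal(0, 8, n)       # group 1 starts higher
post = 20 + 0.6 * pre + rng.normal(0, 5, n)      # regression to the mean,
df = pd.DataFrame(dict(group=group, pre=pre, post=post))  # no true effect

ancova = smf.ols("post ~ pre + group", data=df).fit()     # pretest as covariate
change = smf.ols("I(post - pre) ~ group", data=df).fit()  # difference score

# ANCOVA finds a group effect near zero; CHANGE finds a spurious negative
# one, because the higher-pretest group shows less gain.
print(ancova.params["group"], change.params["group"])
```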
NASA Astrophysics Data System (ADS)
De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.
2016-05-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
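Note that the "3-sigma" metrics here are defined as percentile estimates, not literal three-standard-deviation values. A minimal sketch of that definition (the function name and sample data are illustrative, not part of IPATS):

```python
import numpy as np

def inr_3sigma(errors):
    """IPATS-style '3-sigma' INR metric: the 99.73rd percentile of the
    absolute errors accumulated over a 24-hour collection period."""
    return np.percentile(np.abs(errors), 99.73)

# e.g. a day of NAV errors (units and distribution invented):
errors = np.random.default_rng(4).normal(0.0, 5.0, 20000)
print(inr_3sigma(errors))   # ~15 for Gaussian errors, i.e. 3 sigma
```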
The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images
NASA Astrophysics Data System (ADS)
Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.
2001-06-01
We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.
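The nominal/true distinction amounts to where the residual error is measured. A schematic rendering under assumed inputs (a simulated source model, the map recovered from the synthetic (u,v) coverage, and an on-source pixel mask; none of these names come from the paper):

```python
import numpy as np

def dynamic_ranges(image, model, on_source):
    """Nominal vs. true dynamic range of a simulated VLBI image:
    map peak over off-source error, and map peak over on-source error.
    `on_source` is a boolean mask flagging source pixels."""
    residual = image - model
    peak = image.max()
    return (peak / residual[~on_source].std(),   # nominal (off-source error)
            peak / residual[on_source].std())    # true (on-source error)
```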
Park, Subin; Kim, Jae-Won; Yang, Young-Hui; Hong, Soon-Beom; Park, Min-Hyeon; Kim, Boong-Nyun; Shin, Min-Sup; Yoo, Hee-Jeong; Cho, Soo-Churl
2012-05-16
Dysregulation of the noradrenergic system may play an important role in the pathophysiology of attention-deficit/hyperactivity disorder (ADHD). We examined the relationship between polymorphisms in the norepinephrine transporter gene SLC6A2 and attentional performance before and after medication in children with ADHD. Fifty-three medication-naïve children with ADHD were genotyped and evaluated using the continuous performance test (CPT). After 8 weeks of methylphenidate treatment, these children were evaluated by CPT again. We compared the baseline CPT measures and the post-treatment changes in the CPT measures based on the G1287A and A-3081T polymorphisms of SLC6A2. There was no significant difference in the baseline CPT measures associated with the G1287A or A-3081T polymorphisms. After medication, however, ADHD subjects with the G/G genotype at the G1287A polymorphism showed a greater decrease in mean omission error scores (p = 0.006) than subjects with the G/A or A/A genotypes, and subjects with the T allele at the A-3081T polymorphism (T/T or A/T) showed a greater decrease in mean commission error scores (p = 0.003) than those with the A/A genotype. Our results provide evidence for a possible role of the G1287A and A-3081T genotypes of SLC6A2 in methylphenidate-induced improvement in attentional performance and support the noradrenergic hypothesis of the pathophysiology of ADHD.
Meaningless comparisons lead to false optimism in medical machine learning
Kording, Konrad; Recht, Benjamin
2017-01-01
A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to explain over 80% of the variance of some mood measures in the population simply by guessing that each patient has their own average mood, the patient-specific baseline. Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, yet is obviously entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose “user lift”, which reduces these systematic errors in the evaluation of personalized medical monitoring. PMID:28949964
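The population-versus-patient-baseline gap can be demonstrated in a few lines. The simulation below is illustrative (all parameters invented): when between-patient spread dominates within-patient variation, guessing each patient's own mean already explains most of the variance, which is exactly the comparison "user lift" is meant to correct for.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pat, n_obs = 50, 30
pat_mean = rng.normal(5.0, 2.0, n_pat)          # stable per-patient mood level
mood = pat_mean[:, None] + rng.normal(0.0, 0.7, (n_pat, n_obs))

def r2(pred, y):
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

pop_baseline = np.full_like(mood, mood.mean())   # guess the grand mean
pat_baseline = np.tile(mood.mean(axis=1, keepdims=True), (1, n_obs))
print(r2(pop_baseline, mood))   # ~0: the population baseline explains nothing
print(r2(pat_baseline, mood))   # ~0.89: the useless per-patient guess "wins"
```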
GPS Attitude Determination Using Deployable-Mounted Antennas
NASA Technical Reports Server (NTRS)
Osborne, Michael L.; Tolson, Robert H.
1996-01-01
The primary objective of this investigation is to develop a method to solve for spacecraft attitude in the presence of potentially incomplete antenna deployment. Most research on the use of the Global Positioning System (GPS) in attitude determination has assumed that the antenna baselines are known to less than 5 centimeters, or one quarter of the GPS signal wavelength. However, if the GPS antennas are mounted on a deployable fixture such as a solar panel, the actual antenna positions will not necessarily be within 5 cm of nominal. Incomplete antenna deployment could cause the baselines to be grossly in error, perhaps by as much as a meter. Overcoming this large uncertainty in order to accurately determine attitude is the focus of this study. To this end, a two-step solution method is proposed. The first step uses a least-squares estimate of the baselines to geometrically calculate the deployment angle errors of the solar panels. For the spacecraft under investigation, the first step determines the baselines to 3-4 cm with 4-8 minutes of data. A Kalman filter is then used to complete the attitude determination process, resulting in typical attitude errors of 0.50 deg.
Espe, Emil K S; Zhang, Lili; Sjaastad, Ivar
2014-10-01
Phase-contrast MRI (PC-MRI) is a versatile tool allowing evaluation of in vivo motion, but is sensitive to eddy current induced phase offsets, causing errors in the measured velocities. In high-resolution PC-MRI, these offsets can be sufficiently large to cause wrapping in the baseline phase, rendering conventional eddy current compensation (ECC) inadequate. The purpose of this study was to develop an improved ECC technique (unwrapping ECC) able to handle baseline phase discontinuities. Baseline phase discontinuities are unwrapped by minimizing the spatiotemporal standard deviation of the static-tissue phase. Computer simulations were used for demonstrating the theoretical foundation of the proposed technique. The presence of baseline wrapping was confirmed in high-resolution myocardial PC-MRI of a normal rat heart at 9.4 Tesla (T), and the performance of unwrapping ECC was compared with conventional ECC. Areas of phase wrapping in static regions were clearly evident in high-resolution PC-MRI. The proposed technique successfully eliminated discontinuities in the baseline, and resulted in significantly better ECC than the conventional approach. We report the occurrence of baseline phase wrapping in PC-MRI, and provide an improved ECC technique capable of handling its presence. Unwrapping ECC offers improved correction of eddy current induced baseline shifts in high-resolution PC-MRI. Copyright © 2013 Wiley Periodicals, Inc.
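As a loose illustration of the ambiguity problem, the sketch below removes 2π jumps from a static-tissue phase map by snapping each pixel to the map median. This is only a simplified stand-in: the paper's unwrapping ECC instead selects offsets by minimizing the spatiotemporal standard deviation of the static-tissue phase.

```python
import numpy as np

def unwrap_static_baseline(phase_map, k_range=(-2, -1, 0, 1, 2)):
    """Remove 2*pi discontinuities from a static-tissue baseline phase map
    by giving each pixel the integer multiple of 2*pi that brings it
    closest to the map median (a crude proxy for the paper's
    spatiotemporal-standard-deviation criterion)."""
    ks = np.asarray(k_range, dtype=float)
    med = np.median(phase_map)
    candidates = phase_map[..., None] + 2.0 * np.pi * ks
    best = ks[np.argmin(np.abs(candidates - med), axis=-1)]
    return phase_map + 2.0 * np.pi * best
```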
Uncertainty of InSAR velocity fields for measuring long-wavelength displacement
NASA Astrophysics Data System (ADS)
Fattahi, H.; Amelung, F.
2014-12-01
Long-wavelength artifacts in InSAR data are the main limitation to measuring long-wavelength displacement; they are traditionally attributed mainly to inaccuracy of the satellite orbits (orbital errors). However, most satellites are precisely tracked, resulting in orbit uncertainties of 2-10 cm. The orbits of these satellites are thus precise enough to obtain velocity fields with uncertainties better than 1 mm/yr/100 km for older satellites (e.g. Envisat) and better than 0.2 mm/yr/100 km for modern satellites (e.g. TerraSAR-X and Sentinel-1) [Fattahi & Amelung, 2014]. Such accurate velocity fields are achievable if long-wavelength artifacts from sources other than orbital errors are identified and corrected for. We present a modified Small Baseline approach to measure long-wavelength deformation and evaluate the uncertainty of these measurements. We use a redundant network of interferograms for detection and correction of unwrapping errors to ensure unbiased estimation of the phase history. We distinguish between different sources of long-wavelength artifacts and correct those introduced by atmospheric delay, topographic residuals, timing errors, processing approximations and hardware issues. We evaluate the uncertainty of the velocity fields using a covariance matrix with contributions from orbital errors and residual atmospheric delay. For the contribution from orbital errors we consider the standard deviation of velocity gradients in the range and azimuth directions as a function of orbital uncertainty. For the contribution from residual atmospheric delay we use several approaches, including structure functions of InSAR time-series epochs, the predicted delay from numerical weather models and estimated wet delay from optical imagery. We validate this InSAR approach for measuring long-wavelength deformation by comparing InSAR velocity fields over a ~500 km long swath across the southern San Andreas fault system with independent GPS velocities, and examine the estimated uncertainties in several non-deforming areas. We show the efficiency of the approach for studying continental deformation across the Chaman fault system at the western Indian plate boundary. Ref: Fattahi, H., & Amelung, F. (2014), InSAR uncertainty due to orbital errors, Geophys. J. Int. (in press).
Dynamic performance of an aero-assist spacecraft - AFE
NASA Technical Reports Server (NTRS)
Chang, Ho-Pen; French, Raymond A.
1992-01-01
Dynamic performance of the Aero-assist Flight Experiment (AFE) spacecraft was investigated using a high-fidelity 6-DOF simulation model. Baseline guidance logic, control logic, and the strapdown navigation system to be used on the AFE spacecraft are also modeled in the 6-DOF simulation. During the AFE mission, uncertainties in the environment and the spacecraft are described by an error space which includes both correlated and uncorrelated error sources. The principal error sources modeled in this study include navigation errors, initial state vector errors, atmospheric variations, aerodynamic uncertainties, center-of-gravity offsets, and weight uncertainties. The impact of the perturbations on spacecraft performance is investigated using Monte Carlo repetitive statistical techniques. During the Solid Rocket Motor (SRM) deorbit phase, a target flight path angle of -4.76 deg at entry interface (EI) offers a very high probability of avoiding SRM casing skip-out from the atmosphere. Generally speaking, the baseline designs of the guidance, navigation, and control systems satisfy most of the science and mission requirements.
Utility of an Occupational Therapy Driving Intervention for a Combat Veteran
Monahan, Miriam; Canonizado, Maria; Winter, Sandra
2014-01-01
Many combat veterans are injured in motor vehicle crashes shortly after returning to civilian life, yet little evidence exists on effective driving interventions. In this single-subject design study, we compared clinical test results and driving errors in a returning combat veteran before and after an occupational therapy driving intervention. A certified driving rehabilitation specialist administered baseline clinical and simulated driving assessments; conducted three intervention sessions that discussed driving errors, retrained visual search skills, and invited commentary on driving; and administered a postintervention evaluation in conditions resembling those at baseline. Clinical test results were similar pre- and postintervention. Baseline versus postintervention driving errors were as follows: lane maintenance, 23 versus 7; vehicle positioning, 5 versus 1; signaling, 2 versus 0; speed regulation, 1 versus 1; visual scanning, 1 versus 0; and gap acceptance, 1 versus 0. Although the intervention appeared efficacious for this participant, threats to validity must be recognized and controlled for in a follow-up study. PMID:25005503
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.
Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs; rebound; dual baselines; and errors in variables (the measurement and/or accuracy of input variables to the evaluation).
Laughton, Deborah S; Sheppard, Amy L; Davies, Leon N
To investigate non-cycloplegic changes in refractive error prior to the onset of presbyopia. The Aston Longitudinal Assessment of Presbyopia (ALAP) study is a prospective 2.5-year longitudinal study measuring objective refractive error with a binocular open-field WAM-5500 autorefractor at 6-month intervals in participants aged between 33 and 45 years. Of the 58 participants recruited, 51 (88%) completed the final visit. At baseline, 21 participants were myopic (MSE -3.25±2.28 DS; baseline age 38.6±3.1 years) and 30 were emmetropic (MSE -0.17±0.32 DS; baseline age 39.0±2.9 years). After 2.5 years, 10% of the myopic group experienced a hypermetropic shift (≥0.50 D), 5% a myopic shift (≥0.50 D) and 85% had no significant change in refraction (<0.50 D). In the emmetropic group, 10% experienced a hypermetropic shift (≥0.50 D), 3% a myopic shift (≥0.50 D) and 87% had no significant change in refraction (<0.50 D). In terms of astigmatism vectors, all measures other than J45 (p<0.001) remained invariant over the study period. The incidence of a myopic shift in refraction during incipient presbyopia does not appear to be as large as previously indicated by retrospective research. The changes in axis indicate that ocular astigmatism tends towards the against-the-rule direction with age. The structural origin(s) of the reported myopic shift in refraction during incipient presbyopia warrants further investigation. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Cohen, Samantha C L; Harvey, Danielle J; Shields, Rebecca H; Shields, Grant S; Rashedi, Roxanne N; Tancredi, Daniel J; Angkustsiri, Kathleen; Hansen, Robin L; Schweitzer, Julie B
2018-04-01
Behavioral therapies are first-line for preschoolers with attention-deficit hyperactivity disorder (ADHD). Studies support yoga for school-aged children with ADHD; this study evaluated yoga in preschoolers on parent- and teacher-rated attention/challenging behaviors, attentional control (Kinder Test of Attentional Performance [KiTAP]), and heart rate variability (HRV). This randomized waitlist-controlled trial tested a 6-week yoga intervention in preschoolers with ≥4 ADHD symptoms on the ADHD Rating Scale-IV Preschool Version. Group 1 (n = 12) practiced yoga first; Group 2 (n = 11) practiced yoga second. We collected data at 4 time points: baseline, T1 (6 weeks), T2 (12 weeks), and follow-up (3 months after T2). At baseline, there were no significant differences between groups. At T1, Group 1 had faster reaction times on the KiTAP go/no-go task (p = 0.01, 95% confidence interval [CI], -371.1 to -59.1, d = -1.7), fewer distractibility errors of omission (p = 0.009, 95% CI, -14.2 to -2.3, d = -1.5), and more commission errors (p = 0.02, 95% CI, 1.4-14.8, d = 1.3) than Group 2. Children in Group 1 with more severe symptoms at baseline showed improvement at T1 versus control on parent-rated Strengths and Difficulties Questionnaire hyperactivity inattention (β = -2.1, p = 0.04, 95% CI, -4.0 to -0.1) and inattention on the ADHD Rating Scale (β = -4.4, p = 0.02, 95% CI, -7.9 to -0.9). HRV measures did not differ between groups. Yoga was associated with modest improvements on an objective measure of attention (KiTAP) and selective improvements on parent ratings.
Very long baseline interferometry applied to polar motion, relativity, and geodesy. Ph. D. thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C.
1978-01-01
The causes and effects of diurnal polar motion are described. An algorithm was developed for modeling the effects on very long baseline interferometry observables. A selection was made between two three-station networks for monitoring polar motion. The effects of scheduling and the number of sources observed on estimated baseline errors are discussed. New hardware and software techniques in very long baseline interferometry are described.
Peak, Jasmine; Goranitis, Ilias; Day, Ed; Copello, Alex; Freemantle, Nick; Frew, Emma
2018-05-30
Economic evaluation normally requires information on outcome improvement in the form of utility values. These are often not collected during the treatment of substance use disorders, making cost-effectiveness evaluations of therapy difficult. One potential solution is the use of mapping to generate utility values from clinical measures. This study develops and evaluates mapping algorithms that could be used to predict the EuroQol-5D (EQ-5D-5L) and the ICEpop CAPability measure for Adults (ICECAP-A) from three commonly used clinical measures: the CORE-OM, the LDQ and the TOP. Models were estimated using pilot trial data of heroin users in opiate substitution treatment. In the trial, the EQ-5D-5L, ICECAP-A, CORE-OM, LDQ and TOP were administered at baseline, three and twelve months. Mapping was conducted using estimation and validation datasets. The normal estimation dataset, which comprised baseline sample data, used ordinary least squares (OLS) and tobit regression methods. Data from the baseline and three-month time periods were combined to create a pooled estimation dataset, from which cluster and mixed regression methods were used to map. Predictive accuracy of the models was assessed using the root mean square error (RMSE) and the mean absolute error (MAE). Algorithms were validated using sample data from the follow-up time periods. Mapping algorithms can be used to predict the ICECAP-A and the EQ-5D-5L in the context of opiate dependence. Although both measures could be predicted, the ICECAP-A was better predicted by the clinical measures. There was no advantage to pooling the data. The 6 chosen mapping algorithms had MAE scores ranging from 0.100 to 0.138 and RMSE scores ranging from 0.134 to 0.178. It is possible to predict the scores of the ICECAP-A and the EQ-5D-5L with the use of mapping. In the context of opiate dependence, these algorithms provide the possibility of generating utility values from clinical measures and thus enabling economic evaluation of alternative therapy options. ISRCTN22608399. Date of registration: 27/04/2012. Date of first randomisation: 14/08/2012.
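The OLS branch of such a mapping study reduces to fitting utilities on an estimation set and checking MAE/RMSE on a validation set. A self-contained sketch with invented data standing in for the clinical scores and utility values (coefficients and ranges are assumptions, not the study's estimates):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(3)

# Invented stand-ins for clinical scores (columns ~ CORE-OM, LDQ, TOP)
# and utility values on roughly the EQ-5D-5L scale.
X_est = rng.normal(size=(150, 3))
X_val = rng.normal(size=(80, 3))
w = np.array([-0.08, -0.05, -0.03])
y_est = np.clip(0.7 + X_est @ w + rng.normal(0, 0.1, 150), -0.2, 1.0)
y_val = np.clip(0.7 + X_val @ w + rng.normal(0, 0.1, 80), -0.2, 1.0)

model = LinearRegression().fit(X_est, y_est)       # the OLS mapping
pred = np.clip(model.predict(X_val), -0.2, 1.0)    # keep predictions in range
print("MAE :", mean_absolute_error(y_val, pred))
print("RMSE:", mean_squared_error(y_val, pred) ** 0.5)
```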
Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor
2005-12-01
Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and thus would allow analysis from multiple data points without exponential fitting. Electrocardiographically triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant β was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and β calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. Calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 +/- 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. β decreased significantly only at 15% flow, and had an intermeasurement variability of 42 +/- 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but β had larger errors at higher rates of contrast inflow. ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to currently used techniques.
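The two competing perfusion indices can be written down compactly: an exponential fit for the flash-echo constant β, and a local-derivative maximum for the ICI peak inflow rate. A hedged sketch (the model form, smoothing window, and function names are assumptions, not the study's code):

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_from_flash(t, vi):
    """Characteristic constant beta from an exponential fit of a
    flash-echo replenishment curve vi(t) = A * (1 - exp(-beta * t))."""
    popt, _ = curve_fit(lambda t, a, b: a * (1.0 - np.exp(-b * t)),
                        t, vi, p0=(np.max(vi), 1.0))
    return popt[1]

def peak_inflow_from_ici(t, vi, window=5):
    """Peak contrast inflow rate from an interruption-of-infusion curve:
    the maximum local time derivative of the smoothed videointensity."""
    smooth = np.convolve(vi, np.ones(window) / window, mode="same")
    return np.max(np.gradient(smooth, t))
```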
Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun
2011-07-07
In the present work, a baseline-correction method based on peak-to-derivative-baseline measurement is proposed for eliminating the complex matrix interference, mainly caused by unknown components and/or background, in the analysis of derivative spectra. This method is particularly applicable when the matrix interfering components show a broad spectral band, which is common in practical analysis. The derivative baseline is established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method were demonstrated through both theoretical simulation and practical application. First, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of the interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method is remarkably better than that of conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. The proposed baseline-correction method was then applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. Satisfactory results were obtained when using this new method to analyze a certified reference material (coconut oil, BCR(®)-458), with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
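A minimal reading of the measurement geometry: the sample and standard-addition derivative spectra cross at two points flanking the analyte band; a straight line through those crossings is the derivative baseline, and the peak is measured from it. The sketch below assumes exactly this configuration (inputs and names are illustrative, not the authors' algorithm):

```python
import numpy as np

def peak_to_derivative_baseline(x, d_sample, d_spiked):
    """Peak height of a derivative spectrum measured from a 'derivative
    baseline' drawn through the two points where the sample and the
    standard-addition (spiked) derivative spectra cross."""
    crossings = np.where(np.diff(np.sign(d_spiked - d_sample)) != 0)[0]
    i, j = crossings[0], crossings[-1]   # assume two crossings flank the peak
    # Straight baseline through the two crossing points on the sample curve.
    slope = (d_sample[j] - d_sample[i]) / (x[j] - x[i])
    baseline = d_sample[i] + slope * (x - x[i])
    k = np.argmax(np.abs(d_sample - baseline))   # peak location
    return (d_sample - baseline)[k]
```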
Residency Training: The King-Devick test and sleep deprivation
Davies, Emma C.; Henderson, Sam; Galetta, Steven L.
2012-01-01
Objective: The current study investigates the effect of sleep deprivation on the speed and accuracy of eye movements as measured by the King-Devick (K-D) test, a <1-minute test that involves rapid number naming. Methods: In this cohort study, neurology residents and staff from the University of Pennsylvania Health System underwent baseline followed by postcall K-D testing (n = 25); those not taking call (n = 10) also completed baseline and follow-up K-D testing. Differences in the times and errors between baseline and follow-up K-D scores were compared between the 2 groups. Results: Residents taking call had less improvement from baseline K-D times when compared to participants not taking call (p < 0.0001, Wilcoxon rank sum test). For both groups, the change in K-D time from baseline was correlated to amount of sleep obtained (rs = −0.50, p = 0.002) and subjective evaluation of level of alertness (rs = 0.33, p = 0.05) but had no correlation to time since last caffeine consumption (rs = −0.13, p = 0.52). For those residents on their actual call night, the duration of sleep obtained did not correlate with change in K-D scores from baseline (rs = 0.13, p = 0.54). Conclusions: The K-D test is sensitive to the effects of sleep deprivation on cognitive functioning, including rapid eye movements, concentration, and language function. As with other measures of sleep deprivation, K-D performance demonstrated significant interindividual variability in vulnerability to sleep deprivation. Severe fatigue appears to reduce the degree of improvement typically observed in K-D testing. PMID:22529208
The RMI Space Weather and Navigation Systems (SWANS) Project
NASA Astrophysics Data System (ADS)
Warnant, Rene; Lejeune, Sandrine; Wautelet, Gilles; Spits, Justine; Stegen, Koen; Stankov, Stan
The SWANS (Space Weather and Navigation Systems) research and development project (http://swans.meteo.be) is an initiative of the Royal Meteorological Institute (RMI) under the auspices of the Belgian Solar-Terrestrial Centre of Excellence (STCE). The RMI SWANS objectives are: research on space weather and its effects on GNSS applications; permanent monitoring of the local/regional geomagnetic and ionospheric activity; and development/operation of relevant nowcast, forecast, and alert services to help professional GNSS/GALILEO users in mitigating space weather effects. Several SWANS developments have already been implemented and are available for use. The K-LOGIC (Local Operational Geomagnetic Index K Calculation) system is a nowcast system based on a fully automated computer procedure for real-time digital magnetogram data acquisition, data screening, and calculation of the local geomagnetic K index. Simultaneously, the planetary Kp index is estimated from solar wind measurements, thus adding to the service reliability and providing forecast capabilities as well. A novel hybrid empirical model, based on these ground- and space-based observations, has been implemented for nowcasting and forecasting the geomagnetic index, also issuing alerts whenever storm-level activity is indicated. A very important feature of the nowcast/forecast system is the strict control on the data input and processing, allowing for an immediate assessment of the output quality. The purpose of the LIEDR (Local Ionospheric Electron Density Reconstruction) system is to acquire and process data from simultaneous ground-based GNSS TEC and digital ionosonde measurements, and subsequently to deduce the vertical electron density distribution. A key module is the real-time estimation of the ionospheric slab thickness, offering additional information on the local ionospheric dynamics. The RTK (Real Time Kinematic) status mapping provides a quick look at the small-scale ionospheric effects on the RTK precision for several GPS stations in Belgium. The service assesses the effect of small-scale ionospheric irregularities by monitoring the high-frequency TEC rate of change at any given station. This assessment results in a (colour) code assigned to each station, ranging from "quiet" (green) to "extreme" (red) and referring to the local ionospheric conditions. Alerts via e-mail are sent to subscribed users when disturbed conditions are observed. SoDIPE (Software for Determining the Ionospheric Positioning Error) estimates the positioning error due to the ionospheric conditions only (called "ionospheric error") in high-precision positioning applications (RTK in particular). For each of the Belgian Active Geodetic Network (AGN) baselines, SoDIPE computes the ionospheric error and its median value (every 15 minutes). Again, a (colour) code is assigned to each baseline, ranging from "nominal" (green) to "extreme" (red) error level. Finally, all available baselines (drawn in colour corresponding to error level) are displayed on a map of Belgium. Future SWANS work will focus on regional ionospheric monitoring and on developing various other nowcast and forecast services.
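The station-classification step of the RTK status map amounts to thresholding a rate-of-change quantity. A toy Python sketch of such a mapping is given below; the threshold values and labels are invented for illustration and are not the operational SWANS settings:

```python
# Hypothetical sketch of the logic behind an RTK status map: classify a
# station by the magnitude of the high-frequency TEC rate of change.
def rtk_status(dtec_rate_tecu_per_min):
    thresholds = [(0.05, "quiet (green)"), (0.15, "moderate (yellow)"),
                  (0.30, "disturbed (orange)")]
    for limit, label in thresholds:
        if abs(dtec_rate_tecu_per_min) < limit:
            return label
    return "extreme (red)"

print(rtk_status(0.02), rtk_status(0.4))
```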
Comparing Error Correction Procedures for Children Diagnosed with Autism
ERIC Educational Resources Information Center
Townley-Cochran, Donna; Leaf, Justin B.; Leaf, Ronald; Taubman, Mitchell; McEachin, John
2017-01-01
The purpose of this study was to examine the effectiveness of two error correction (EC) procedures: modeling alone and the use of an error statement plus modeling. Utilizing an alternating treatments design nested into a multiple baseline design across participants, we sought to evaluate and compare the effects of these two EC procedures used to…
Sørensen, L B; Damsgaard, C T; Petersen, R A; Dalskov, S-M; Hjorth, M F; Dyssegaard, C B; Egelund, N; Tetens, I; Astrup, A; Lauritzen, L; Michaelsen, K F
2016-10-01
We previously found that the OPUS School Meal Study improved reading and increased errors related to inattention and impulsivity. This study explored whether the cognitive effects differed according to gender, household education and reading proficiency at baseline. This is a cluster-randomised cross-over trial comparing Nordic school meals with packed lunch from home (control) for 3 months each among 834 children aged 8 to 11 years. At baseline and at the end of each dietary period, we assessed children's performance in reading, mathematics and the d2-test of attention. Interactions were evaluated using mixed models. Analyses included 739 children. At baseline, boys and children from households without academic education were poorer readers and had a higher d2-error%. Effects on dietary intake were similar in subgroups. However, the effect of the intervention on test outcomes was stronger in boys, in children from households with academic education and in children with normal/good baseline reading proficiency. Overall, this resulted in increased socioeconomic inequality in reading performance and reduced inequality in impulsivity. Contrary to this, the gender difference decreased in reading and increased in impulsivity. Finally, the gap between poor and normal/good readers was increased in reading and decreased for d2-error%. The effects of healthy school meals on reading, impulsivity and inattention were modified by gender, household education and baseline reading proficiency. The differential effects might be related to environmental aspects of the intervention and deserve to be investigated further in future school meal trials.
Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.
2017-01-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
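The core of the phase-retrieval step can be sketched compactly. The following Python fragment is a minimal illustration of Kramers-Kronig phase retrieval for CARS (a standard formulation, not the authors' released tools): the retrieved phase is the Hilbert transform of half the log-ratio of the measured CARS spectrum to the NRB reference, and an inaccurate NRB estimate shows up mainly as a slowly varying additive phase, which is what the paper's detrending targets:

```python
import numpy as np
from scipy.signal import hilbert

def kk_retrieve(i_cars, i_nrb):
    # Phase via the Kramers-Kronig relation: Hilbert transform of the half
    # log-ratio (np.imag(hilbert(x)) is the Hilbert transform of x).
    log_ratio = 0.5 * np.log(i_cars / i_nrb)
    phase = np.imag(hilbert(log_ratio))
    amplitude = np.sqrt(i_cars / i_nrb)
    # The imaginary part of the retrieved complex ratio carries the
    # Raman-like (chemically specific) signal.
    return amplitude * np.sin(phase), phase

w = np.linspace(0, 10, 1024)
i_nrb = np.exp(-(w - 5) ** 2 / 8)                            # smooth NRB-like reference
i_cars = i_nrb * np.abs(1 + 0.1 / (4.0 - w - 0.05j)) ** 2    # one Lorentzian line
raman, phase = kk_retrieve(i_cars, i_nrb)
```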
Gleich, Stephen J; Nemergut, Michael E; Stans, Anthony A; Haile, Dawit T; Feigal, Scott A; Heinrich, Angela L; Bosley, Christopher L; Tripathi, Sandeep
2016-08-01
Ineffective and inefficient patient transfer processes can increase the chance of medical errors. Improvements in such processes are high-priority local institutional and national patient safety goals. At our institution, nonintubated postoperative pediatric patients are first admitted to the postanesthesia care unit before transfer to the PICU. This quality improvement project was designed to improve the patient transfer process from the operating room (OR) to the PICU. After direct observation of the baseline process, we introduced a structured, direct OR-PICU transfer process for orthopedic spinal fusion patients. We performed value stream mapping of the process to determine error-prone and inefficient areas. We evaluated primary outcome measures of handoff error reduction and the overall efficiency of patient transfer process time. Staff satisfaction was evaluated as a counterbalance measure. With the introduction of the new direct OR-PICU patient transfer process, the handoff communication error rate improved from 1.9 to 0.3 errors per patient handoff (P = .002). Inefficiency (patient wait time and non-value-creating activity) was reduced from 90 to 32 minutes. Handoff content was improved with fewer information omissions (P < .001). Staff satisfaction significantly improved among nearly all PICU providers. By using quality improvement methodology to design and implement a new direct OR-PICU transfer process with a structured multidisciplinary verbal handoff, we achieved sustained improvements in patient safety and efficiency. Handoff communication was enhanced, with fewer errors and content omissions. The new process improved efficiency, with high staff satisfaction. Copyright © 2016 by the American Academy of Pediatrics.
Using lean to improve medication administration safety: in search of the "perfect dose".
Ching, Joan M; Long, Christina; Williams, Barbara L; Blackmore, C Craig
2013-05-01
At Virginia Mason Medical Center (Seattle), the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study was used in combination with Lean quality improvement efforts to address medication administration safety. Lean interventions were targeted at improving the medication room layout, applying visual controls, and implementing nursing standard work. The interventions were designed to prevent medication administration errors by improving six safe practices: (1) comparing medication with the medication administration record, (2) labeling medication, (3) checking two forms of patient identification, (4) explaining medication to the patient, (5) charting medication immediately, and (6) protecting the process from distractions/interruptions. Trained nurse auditors observed 9,244 doses for 2,139 patients. Following the intervention, the number of safe-practice violations decreased from 83 violations/100 doses at baseline (January 2010-March 2010) to 42 violations/100 doses at final follow-up (July 2011-September 2011), an absolute risk reduction of 42 violations/100 doses (95% confidence interval [CI]: 35-48; p < .001). The number of medication administration errors decreased from 10.3 errors/100 doses at baseline to 2.8 errors/100 doses at final follow-up (absolute risk reduction: 7 errors/100 doses; 95% CI: 5-10; p < .001). The "perfect dose" score, reflecting compliance with all six safe practices and the absence of any of the eight medication administration errors, improved from 37 compliant/100 doses at baseline to 68 compliant/100 doses at final follow-up. Lean process improvements coupled with direct observation can contribute to substantial decreases in errors in nursing medication administration.
Boyce, Matthew R; Menya, Diana; Turner, Elizabeth L; Laktabai, Jeremiah; Prudhomme-O'Meara, Wendy
2018-05-18
Malaria rapid diagnostic tests (RDTs) are a simple, point-of-care technology that can improve the diagnosis and subsequent treatment of malaria. They are an increasingly common diagnostic tool, but concerns remain about their use by community health workers (CHWs). These concerns regard the long-term trends relating to infection prevention measures, the interpretation of test results and adherence to treatment protocols. This study assessed whether CHWs maintained their competency at conducting RDTs over a 12-month timeframe, and if this competency varied with specific CHW characteristics. From June to September, 2015, CHWs (n = 271) were trained to conduct RDTs using a 3-day validated curriculum and a baseline assessment was completed. Between June and August, 2016, CHWs (n = 105) were randomly selected and recruited for follow-up assessments using a 20-step checklist that classified steps as relating to safety, accuracy, and treatment; 103 CHWs participated in follow-up assessments. Poisson regressions were used to test for associations between CHW characteristics and error counts at follow-up, and Poisson regression models fit using generalized estimating equations were used to compare data across time-points. At both baseline and follow-up observations, at least 80% of CHWs correctly completed 17 of the 20 steps. CHWs being 50 years of age or older was associated with increased total errors and safety errors at baseline and follow-up. At follow-up, prior experience conducting RDTs was associated with fewer errors. Performance, as it related to the correct completion of all checklist steps and safety steps, did not decline over the 12 months, and performance of accuracy steps improved (mean error ratio: 0.51; 95% CI 0.40-0.63). Visual interpretation of RDT results yielded a CHW sensitivity of 92.0% and a specificity of 97.3% when compared to interpretation by the research team. None of the characteristics investigated was found to be significantly associated with RDT interpretation. With training, most CHWs performing RDTs maintain diagnostic testing competency over at least 12 months. CHWs generally perform RDTs safely and accurately interpret results. Younger age and prior experience with RDTs were associated with better testing performance. Future research should investigate the mode by which CHW characteristics impact RDT procedures.
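The two modeling strategies mentioned can be sketched in Python with statsmodels; the data frame and column names below are hypothetical stand-ins for the CHW checklist counts, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic two-timepoint error counts for 50 CHWs (illustrative only).
rng = np.random.default_rng(0)
n_chw = 50
long_df = pd.DataFrame({
    "chw_id": np.repeat(np.arange(n_chw), 2),
    "timepoint": np.tile(["baseline", "followup"], n_chw),
    "age_50_plus": np.repeat(rng.integers(0, 2, n_chw), 2),
})
long_df["total_errors"] = rng.poisson(1 + long_df["age_50_plus"])

# Poisson GEE with an exchangeable working correlation: compares error counts
# across time-points while accounting for repeated observations per CHW.
gee_model = smf.gee("total_errors ~ timepoint + age_50_plus",
                    groups="chw_id", data=long_df,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee_model.summary())
```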
Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J
2016-01-01
Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship, resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of magnitude on many simulated datasets. The advantages of the proposed pipeline include informed and data-specific input arguments for baseline subtraction methods, the avoidance of time-intensive and subjective piecewise baseline subtraction, and the ability to automate baseline subtraction completely. Moreover, individual steps can be adopted as stand-alone routines.
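To make the role of the axis transformation concrete, here is a deliberately simplified Python sketch (an assumption-laden stand-in, not the published pipeline, which uses a 'continuous' line segment algorithm rather than a moving minimum): resampling onto a transformed axis makes peak widths roughly constant, so a single window length suffices:

```python
import numpy as np

def baseline_subtract(mz, intensity, window=101):
    # Resample onto a sqrt-transformed m/z axis (one of several transforms
    # one might evaluate) so peak widths become roughly constant.
    t = np.sqrt(mz)
    t_uniform = np.linspace(t[0], t[-1], t.size)
    y = np.interp(t_uniform, t, intensity)
    # Moving minimum as a crude baseline estimate on the transformed axis.
    half = window // 2
    padded = np.pad(y, half, mode="edge")
    base = np.array([padded[i:i + window].min() for i in range(y.size)])
    # Map the baseline back to the original m/z samples and subtract.
    baseline = np.interp(t, t_uniform, base)
    return intensity - baseline
```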
Individual differences in normal body temperature: longitudinal big data analysis of patient records
Samra, Jasmeet K; Mullainathan, Sendhil
2017-01-01
Objective: To estimate individual level body temperature and to correlate it with other measures of physiology and health. Design: Observational cohort study. Setting: Outpatient clinics of a large academic hospital, 2009-14. Participants: 35 488 patients who neither received a diagnosis for infections nor were prescribed antibiotics, in whom temperature was expected to be within normal limits. Main outcome measures: Baseline temperatures at individual level, estimated using random effects regression and controlling for ambient conditions at the time of measurement, body site, and time factors. Baseline temperatures were correlated with demographics, medical comorbidities, vital signs, and subsequent one year mortality. Results: In a diverse cohort of 35 488 patients (mean age 52.9 years, 64% women, 41% non-white race) with 243 506 temperature measurements, mean temperature was 36.6°C (95% range 35.7-37.3°C, 99% range 35.3-37.7°C). Several demographic factors were linked to individual level temperature, with older people the coolest (–0.021°C for every decade, P<0.001) and African-American women the hottest (versus white men: 0.052°C, P<0.001). Several comorbidities were linked to lower temperature (eg, hypothyroidism: –0.013°C, P=0.01) or higher temperature (eg, cancer: 0.020°C, P<0.001), as were physiological measurements (eg, body mass index: 0.002°C per kg/m², P<0.001). Overall, measured factors collectively explained only 8.2% of individual temperature variation. Despite this, unexplained temperature variation was a significant predictor of subsequent mortality: controlling for all measured factors, an increase of 0.149°C (1 SD of individual temperature in the data) was linked to 8.4% higher one year mortality (P=0.014). Conclusions: Individuals’ baseline temperatures showed meaningful variation that was not due solely to measurement error or environmental factors. Baseline temperatures correlated with demographics, comorbid conditions, and physiology, but these factors explained only a small part of individual temperature variation. Unexplained variation in baseline temperature, however, strongly predicted mortality. PMID:29237616
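The random-effects construction of an individual "baseline temperature" can be illustrated with a small mixed-model sketch (synthetic data and hypothetical column names, not the study's records):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Each patient's baseline temperature is the random intercept left over after
# adjusting for measurement conditions (here just ambient temperature and hour).
rng = np.random.default_rng(1)
n_pat, n_per = 200, 6
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_per),
    "ambient_temp": rng.normal(22, 3, n_pat * n_per),
    "hour": rng.integers(8, 18, n_pat * n_per),
})
true_baseline = rng.normal(36.6, 0.15, n_pat)
df["temp"] = (np.repeat(true_baseline, n_per)
              + 0.01 * (df["ambient_temp"] - 22)
              + rng.normal(0, 0.2, len(df)))

# Random intercept per patient; fixed effects absorb the ambient conditions.
model = smf.mixedlm("temp ~ ambient_temp + C(hour)", df, groups=df["patient"]).fit()
# Per-patient baseline estimates (up to the fixed-effect reference level).
baselines = {g: model.params["Intercept"] + re.iloc[0]
             for g, re in model.random_effects.items()}
```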
Kontodimopoulos, Nick; Bozios, Panagiotis; Yfantopoulos, John; Niakas, Dimitris
2013-04-01
The purpose of this methodological study was to provide insight into the under-addressed issue of the longitudinal predictive ability of mapping models. Post-intervention predicted and reported utilities were compared, and the effect of disease severity on the observed differences was examined. A cohort of 120 rheumatoid arthritis (RA) patients (60.0% female, mean age 59.0) embarking on therapy with biological agents completed the Modified Health Assessment Questionnaire (MHAQ) and the EQ-5D at baseline and at 3, 6 and 12 months post-intervention. OLS regression produced a mapping equation to estimate post-intervention EQ-5D utilities from baseline MHAQ data. Predicted and reported utilities were compared with a t test, and the prediction error was modeled, using fixed effects, in terms of covariates such as age, gender, time, disease duration, treatment, RF, DAS28 score, and predicted and reported EQ-5D. The OLS model (RMSE = 0.207, R(2) = 45.2%) consistently underestimated future utilities, with a mean prediction error of 6.5%. Mean absolute differences between reported and predicted EQ-5D utilities at 3, 6 and 12 months exceeded the typically reported MID of the EQ-5D (0.03). According to the fixed-effects model, time, lower predicted EQ-5D and higher DAS28 scores had a significant impact on prediction errors, which appeared increasingly negative for lower reported EQ-5D scores, i.e., predicted utilities tended to be lower than reported ones in more severe health states. This study builds upon existing research that has demonstrated the potential usefulness of mapping disease-specific instruments onto utility measures. The specific issue of longitudinal validity is addressed, as mapping models derived from baseline patients need to be validated on post-therapy samples. The underestimation of post-treatment utilities in the present study, at least in more severe patients, warrants further research before it is prudent to conduct cost-utility analyses in the context of RA by means of the MHAQ alone.
Cook, David A; Dupras, Denise M; Beckman, Thomas J; Thomas, Kris G; Pankratz, V Shane
2009-01-01
Background: Mini-CEX scores are used to assess resident competence, and rater training might improve mini-CEX score interrater reliability, but evidence is lacking. Objective: To evaluate a rater training workshop using interrater reliability and accuracy. Design: Randomized trial (immediate versus delayed workshop) and single-group pre/post study (randomized groups combined). Setting: Academic medical center. Participants: Fifty-two internal medicine clinic preceptors (31 randomized and 21 additional workshop attendees). Intervention: The workshop included rater error training, performance dimension training, behavioral observation training, and frame of reference training using lecture, video, and facilitated discussion. The delayed group received no intervention until after the posttest. Measurements: Mini-CEX ratings at baseline (just before the workshop for the workshop group) and four weeks later using videotaped resident-patient encounters; mini-CEX ratings of live resident-patient encounters one year preceding and one year following the workshop; and rater confidence using the mini-CEX. Results: Among 31 randomized participants, interrater reliabilities in the delayed group (baseline intraclass correlation coefficient [ICC] 0.43, follow-up 0.53) and workshop group (baseline 0.40, follow-up 0.43) were not significantly different (p = 0.19). Mean ratings were similar at baseline (delayed 4.9 [95% confidence interval 4.6-5.2], workshop 4.8 [4.5-5.1]) and follow-up (delayed 5.4 [5.0-5.7], workshop 5.3 [5.0-5.6]; p = 0.88 for interaction). For the entire cohort, rater confidence (1 = not confident, 6 = very confident) improved from mean (SD) 3.8 (1.4) to 4.4 (1.0), p = 0.018. Interrater reliability for ratings of live encounters (entire cohort) was higher after the workshop (ICC 0.34) than before (ICC 0.18), but the standard error of measurement was similar for both periods. Conclusions: Rater training did not improve interrater reliability or accuracy of mini-CEX scores. Trial registration: clinicaltrials.gov identifier NCT00667940.
ERIC Educational Resources Information Center
Leon, Yanerys; Wilder, David A.; Majdalany, Lina; Myers, Kristin; Saini, Valdeep
2014-01-01
We conducted two experiments to evaluate the effects of errors of omission and commission during alternative reinforcement of compliance in young children. In Experiment 1, we evaluated errors of omission by examining two levels of integrity during alternative reinforcement (20 and 60%) for child compliance following no treatment (baseline) versus…
Adverse Effects in Dual-Star Interferometry
NASA Technical Reports Server (NTRS)
Colavita, M. Mark
2008-01-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, exploiting the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews: the key aspects of the dual-star approach and implementation; the main contributors to the
Yurko, Yuliya Y; Scerbo, Mark W; Prabhu, Ajita S; Acker, Christina E; Stefanidis, Dimitrios
2010-10-01
Increased workload during task performance may increase fatigue and facilitate errors. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is a previously validated tool for workload self-assessment. We assessed the relationship of workload and performance during simulator training on a complex laparoscopic task. NASA-TLX workload data from three separate trials were analyzed. All participants were novices (n = 28), followed the same curriculum on the fundamentals of laparoscopic surgery suturing model, and were tested in the animal operating room (OR) on a Nissen fundoplication model after training. Performance and workload scores were recorded at baseline, after proficiency achievement, and during the test. Performance, NASA-TLX scores, and inadvertent injuries during the test were analyzed and compared. Workload scores declined during training and mirrored performance changes. NASA-TLX scores correlated significantly with performance scores (r = -0.5, P < 0.001). Participants with higher workload scores caused more inadvertent injuries to adjacent structures in the OR (r = 0.38, P < 0.05). Increased mental and physical workload scores at baseline correlated with higher workload scores in the OR (r = 0.52-0.82; P < 0.05) and more inadvertent injuries (r = 0.52, P < 0.01). Increased workload is associated with inferior task performance and higher likelihood of errors. The NASA-TLX questionnaire accurately reflects workload changes during simulator training and may identify individuals more likely to experience high workload and more prone to errors during skill transfer to the clinical environment.
NASA Astrophysics Data System (ADS)
Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.
2018-03-01
This paper presents a data-driven computational model for simulating unsteady turbulent flows where sparse measurement data are available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
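A heavily simplified stand-in for the idea of measurement-driven coefficient adaptation is sketched below; it uses finite-difference gradient descent on a toy surrogate model and is not the RCA algorithm itself, which operates online inside the CFD solver:

```python
import numpy as np

# Tune a single "closure coefficient" c so a cheap surrogate matches sparse
# synthetic measurements; all names and numbers here are illustrative.
def surrogate(c, x):
    return np.exp(-c * x)            # stand-in for a simulated flow profile

x_meas = np.linspace(0.1, 1.0, 8)    # sparse measurement locations
y_meas = surrogate(0.7, x_meas)      # synthetic truth with c = 0.7

c, lr, eps = 0.3, 0.5, 1e-5
for _ in range(200):
    def cost(cc):
        r = surrogate(cc, x_meas) - y_meas
        return float(r @ r)          # squared measurement mismatch
    grad = (cost(c + eps) - cost(c - eps)) / (2 * eps)
    c -= lr * grad                   # adapt the coefficient toward the data
print(f"adapted coefficient: {c:.3f}")   # converges toward 0.7
```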
Performance analysis of an integrated GPS/inertial attitude determination system. M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Sullivan, Wendy I.
1994-01-01
The performance of an integrated GPS/inertial attitude determination system is investigated using a linear covariance analysis. The principles of GPS interferometry are reviewed, and the major error sources of both interferometers and gyroscopes are discussed and modeled. A new figure of merit, attitude dilution of precision (ADOP), is defined for two possible GPS attitude determination methods, namely single difference and double difference interferometry. Based on this figure of merit, a satellite selection scheme is proposed. The performance of the integrated GPS/inertial attitude determination system is determined using a linear covariance analysis. Based on this analysis, it is concluded that the baseline errors (i.e., knowledge of the GPS interferometer baseline relative to the vehicle coordinate system) are the limiting factor in system performance. By reducing baseline errors, it should be possible to use lower quality gyroscopes without significantly reducing performance. For the cases considered, single difference interferometry is only marginally better than double difference interferometry. Finally, the performance of the system is found to be relatively insensitive to the satellite selection technique.
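Although the thesis's exact ADOP formula is not given in the abstract, dilution-of-precision figures of merit generally follow from the trace of the inverse normal matrix of the measurement geometry. A generic Python sketch of that construction (an illustration, not the thesis's formulation) is:

```python
import numpy as np

def dop(los_vectors):
    # One unit line-of-sight row per satellite, shape (n, 3); a DOP-style
    # scalar is sqrt(trace((H^T H)^-1)), so lower means better geometry.
    H = np.asarray(los_vectors)
    cov = np.linalg.inv(H.T @ H)
    return float(np.sqrt(np.trace(cov)))

sats = [[0.0, 0.0, 1.0], [0.8, 0.0, 0.6], [-0.4, 0.7, 0.6], [-0.4, -0.7, 0.6]]
print(f"DOP = {dop(sats):.2f}")
```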
Sherman, V; Feldman, L S; Stanbridge, D; Kazmi, R; Fried, G M
2005-05-01
The aim of this study was to develop summary metrics and assess the construct validity for a virtual reality laparoscopic simulator (LapSim) by comparing the learning curves of three groups with different levels of laparoscopic expertise. Three groups of subjects ('expert', 'junior', and 'naïve') underwent repeated trials on three LapSim tasks. Formulas were developed to calculate scores for efficiency ('time-error') and economy of motion ('motion') using metrics generated by the software after each drill. Data (mean +/- SD) were evaluated by analysis of variance (ANOVA). Significance was set at p < 0.05. All three groups improved significantly from baseline to final testing for both 'time-error' and 'motion' scores. There were significant differences between groups in 'time-error' performance at baseline and final testing, due to higher scores in the 'expert' group. A significant difference in 'motion' scores was seen only at baseline. We have developed summary metrics for the LapSim that differentiate among levels of laparoscopic experience. This study also provides evidence of construct validity for the LapSim.
Gençay, R; Qi, M
2001-01-01
We study the effectiveness of cross validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities with daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the mean squared prediction error (MSPE) of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used, at least in cases where no appropriate hints are available.
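As a flavor of the bagging strategy compared in the paper, here is a minimal Python sketch using synthetic pricing-style data (not the S&P 500 options set) and scikit-learn:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

# Synthetic call-price-like targets from (moneyness, maturity) features.
rng = np.random.default_rng(0)
X = rng.uniform([0.8, 0.05], [1.2, 1.0], size=(500, 2))
y = np.maximum(X[:, 0] - 1.0, 0) + 0.1 * X[:, 1] + rng.normal(0, 0.01, 500)

# Bagging an ensemble of small neural networks: each learner sees a bootstrap
# resample, and the averaged prediction reduces variance/overfitting.
model = BaggingRegressor(
    estimator=MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000),
    n_estimators=25, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```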
Does Exercise Improve Cognitive Performance? A Conservative Message from Lord's Paradox.
Liu, Sicong; Lebeau, Jean-Charles; Tenenbaum, Gershon
2016-01-01
Although extant meta-analyses support the notion that exercise enhances cognitive performance, methodological shortcomings are noted in the primary evidence. The present study examined relevant randomized controlled trials (RCTs) published in the past 20 years (1996-2015) for methodological concerns arising from Lord's paradox. Our analysis revealed that RCTs supporting the positive effect of exercise on cognition are likely to include Type I error(s). This result can be attributed to the use of gain score analysis on pretest-posttest data as well as the presence of control group superiority over the exercise group on baseline cognitive measures. To improve the accuracy of causal inferences in this area, analysis of covariance on pretest-posttest data is recommended under the assumption of group equivalence. Important experimental procedures are discussed to maintain group equivalence.
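The crux of the argument is easy to reproduce numerically. The following Python sketch (synthetic data) builds a case with no true treatment effect but control-group superiority at baseline; the gain-score analysis then shows a spurious group effect, while ANCOVA on the posttest does not:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                 # 0 = control, 1 = exercise
pre = rng.normal(100 - 3 * group, 10, n)      # control superior at baseline
post = 0.7 * pre + rng.normal(30, 8, n)       # no true treatment effect
df = pd.DataFrame({"group": group, "pre": pre, "post": post,
                   "gain": post - pre})

gain_model = smf.ols("gain ~ group", df).fit()          # gain-score analysis
ancova_model = smf.ols("post ~ group + pre", df).fit()  # ANCOVA
# The gain-score coefficient is biased away from zero by the baseline
# imbalance (regression to the mean); the ANCOVA coefficient is near zero.
print(gain_model.params["group"], ancova_model.params["group"])
```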
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm, a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point as functions of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the constrained time-series solution and the unconstrained time-series can be interpreted as a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in oil extraction as well as local height corrections improving on the low-resolution 3 arc-sec SRTM DEM.
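A minimal per-pixel version of the augmented least-squares system can be sketched as follows (all numbers are synthetic and the notation is illustrative, not the authors' implementation): each interferogram constrains the sum of interval velocities it spans plus a DEM-error term scaled by its baseline-dependent height-to-phase factor, and weighted finite-difference rows penalize acceleration:

```python
import numpy as np

t = np.array([0.0, 24, 48, 72, 96]) / 365.25        # acquisition times (years)
pairs = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 4)]    # interferogram index pairs
kz = np.array([0.1, -0.3, 0.2, 0.05, -0.15])        # height-to-phase factors (rad/m)
phase = np.array([0.5, 1.1, 1.0, 1.2, 0.7])         # unwrapped phases (rad)

n_int, n_vel = len(pairs), len(t) - 1
A = np.zeros((n_int, n_vel + 1))
for i, (m, s) in enumerate(pairs):
    A[i, m:s] = np.diff(t)[m:s]   # phase accumulates velocity over spanned intervals
    A[i, n_vel] = kz[i]           # DEM-error sensitivity scales with baseline

gamma = 1.0                       # smoothing weight (0 = unconstrained)
S = np.zeros((n_vel - 1, n_vel + 1))
for j in range(n_vel - 1):        # acceleration-penalty rows
    S[j, j], S[j, j + 1] = -gamma, gamma

x, *_ = np.linalg.lstsq(np.vstack([A, S]),
                        np.concatenate([phase, np.zeros(n_vel - 1)]), rcond=None)
velocities, dz = x[:n_vel], x[n_vel]   # interval velocities + DEM height correction
```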
Obermeyer, Ziad; Samra, Jasmeet K; Mullainathan, Sendhil
2017-12-13
Objective: To estimate individual level body temperature and to correlate it with other measures of physiology and health. Design: Observational cohort study. Setting: Outpatient clinics of a large academic hospital, 2009-14. Participants: 35 488 patients who neither received a diagnosis for infections nor were prescribed antibiotics, in whom temperature was expected to be within normal limits. Main outcome measures: Baseline temperatures at individual level, estimated using random effects regression and controlling for ambient conditions at the time of measurement, body site, and time factors. Baseline temperatures were correlated with demographics, medical comorbidities, vital signs, and subsequent one year mortality. Results: In a diverse cohort of 35 488 patients (mean age 52.9 years, 64% women, 41% non-white race) with 243 506 temperature measurements, mean temperature was 36.6°C (95% range 35.7-37.3°C, 99% range 35.3-37.7°C). Several demographic factors were linked to individual level temperature, with older people the coolest (-0.021°C for every decade, P<0.001) and African-American women the hottest (versus white men: 0.052°C, P<0.001). Several comorbidities were linked to lower temperature (eg, hypothyroidism: -0.013°C, P=0.01) or higher temperature (eg, cancer: 0.020°C, P<0.001), as were physiological measurements (eg, body mass index: 0.002°C per kg/m², P<0.001). Overall, measured factors collectively explained only 8.2% of individual temperature variation. Despite this, unexplained temperature variation was a significant predictor of subsequent mortality: controlling for all measured factors, an increase of 0.149°C (1 SD of individual temperature in the data) was linked to 8.4% higher one year mortality (P=0.014). Conclusions: Individuals' baseline temperatures showed meaningful variation that was not due solely to measurement error or environmental factors. Baseline temperatures correlated with demographics, comorbid conditions, and physiology, but these factors explained only a small part of individual temperature variation. Unexplained variation in baseline temperature, however, strongly predicted mortality. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Stone, Nicole; Graham, Cynthia; Anstee, Sydney; Brown, Katherine; Newby, Katie; Ingham, Roger
2018-01-01
Condoms remain the main protection against sexually transmitted infections (STIs) when used correctly and consistently. Yet, there are many reported barriers to their use, such as negative attitudes, reduced sexual pleasure, fit-and-feel problems and erection difficulties. The UK home-based intervention strategy (HIS-UK) is a behaviour change condom promotion intervention for use among young men (aged 16-25 years) designed to increase condom use by enhancing enjoyment of condom-protected intercourse. The objective of this feasibility study was to test HIS-UK for viability, operability and acceptability. Along with an assessment of the recruitment strategy and adherence to the intervention protocol, the study tested the reliability and suitability of a series of behavioural and condom use outcome measures to assess condom use attitudes, motivations, self-efficacy, use experience, errors and problems, and fit and feel. The HIS-UK intervention and associated assessment instruments were tested for feasibility using a single-arm, repeated measures design with baseline measurement and two follow-up measurements over 3 months. A 3-month target of 50 young men completing the baseline questionnaire was set. Twenty process and acceptability evaluation interviews with participants and health promotion professionals were conducted post trial. Of the 61 young men who registered for the study, 57 completed the baseline questionnaire and 33 met with the study researcher to receive the HIS-UK condom kit. Twenty-one young men remained for the duration of the study (64% retention). The Cronbach's alpha scores for the condom use outcome measures were 0.84 (attitudes), 0.78 (self-efficacy), 0.83 (use experience), 0.69 (errors and problems) and 0.75 (fit and feel). Participant and health professional feedback indicated strong acceptability of the intervention. The feasibility study demonstrated that our recruitment strategy was appropriate and the target sample size was achieved. Adherence was favourable when compared to other similar studies. The condom use measures tested proved to be fit for purpose, with good internal consistency. Some further development and subsequent piloting of HIS-UK is required prior to a full randomised controlled trial, including the feasibility of collecting STI biomarkers and assessment of participant acceptance of randomisation. Research registry, RR2315, 27th March 2017 (retrospectively registered).
The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier
2013-02-14
[Briefing-slide fragments; little coherent text is recoverable. The slides report baseline simulation results and wave sensitivity studies, giving the peak water level percent error and the mean absolute percent error (MAPE) of high-water marks at LAWMA Amerada Pass, Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass, and note which CD formulation produced the best results.]
Davies, Emma C; Henderson, Sam; Balcer, Laura J; Galetta, Steven L
2012-04-24
The current study investigates the effect of sleep deprivation on the speed and accuracy of eye movements as measured by the King-Devick (K-D) test, a <1-minute test that involves rapid number naming. In this cohort study, neurology residents and staff from the University of Pennsylvania Health System underwent baseline followed by postcall K-D testing (n = 25); those not taking call (n = 10) also completed baseline and follow-up K-D testing. Differences in the times and errors between baseline and follow-up K-D scores were compared between the 2 groups. Residents taking call had less improvement from baseline K-D times when compared to participants not taking call (p < 0.0001, Wilcoxon rank sum test). For both groups, the change in K-D time from baseline was correlated to amount of sleep obtained (r(s) = -0.50, p = 0.002) and subjective evaluation of level of alertness (r(s) = 0.33, p = 0.05) but had no correlation to time since last caffeine consumption (r(s) = -0.13, p = 0.52). For those residents on their actual call night, the duration of sleep obtained did not correlate with change in K-D scores from baseline (r(s) = 0.13, p = 0.54). The K-D test is sensitive to the effects of sleep deprivation on cognitive functioning, including rapid eye movements, concentration, and language function. As with other measures of sleep deprivation, K-D performance demonstrated significant interindividual variability in vulnerability to sleep deprivation. Severe fatigue appears to reduce the degree of improvement typically observed in K-D testing.
Tilgner, Linda; Wertheim, Eleanor H; Paxton, Susan J
2004-03-01
The current study examined whether a social desirability response bias is a source of measurement error in prevention research. Six hundred and seventy-seven female students in Grade 7 (n = 345) and Grade 8 (n = 332) were divided into either an intervention condition, in which participants watched a videotape promoting body acceptance and discouraging dieting and then discussed issues related to the video, or a control condition. Questionnaires were completed at baseline, postintervention, and at 1-month follow-up. Social desirability scores were correlated at a low but significant level with baseline body dissatisfaction, drive for thinness, bulimic tendencies, intention to diet, and size discrepancy for intervention participants. Social desirability did not correlate significantly with change over time in the outcome measures. The findings suggested that changes in girls' self-reports related to a prevention program were relatively free of social desirability response bias. Copyright 2004 by Wiley Periodicals, Inc. Int J Eat Disord 35: 211-216, 2004.
Zucker, Jason; Mittal, Jaimie; Jen, Shin-Pung; Cheng, Lucy; Cennimo, David
2016-03-01
There is a high prevalence of HIV infection in Newark, New Jersey, with University Hospital admitting approximately 600 HIV-infected patients per year. Medication errors involving antiretroviral therapy (ART) could significantly affect treatment outcomes. The goal of this study was to evaluate the effectiveness of various stewardship interventions in reducing the prevalence of prescribing errors involving ART. This was a retrospective review of all inpatients receiving ART for HIV treatment during three distinct 6-month intervals over a 3-year period. During the first year, the baseline prevalence of medication errors was determined. During the second year, physician and pharmacist education was provided, and a computerized order entry system with drug information resources and prescribing recommendations was implemented. Prospective audit of ART orders with feedback was conducted in the third year. Analyses and comparisons were made across the three phases of this study. Of the 334 patients with HIV admitted in the first year, 45% had at least one antiretroviral medication error and 38% had uncorrected errors at the time of discharge. After education and computerized order entry, significant reductions in medication error rates were observed compared to baseline rates; 36% of 315 admissions had at least one error and 31% had uncorrected errors at discharge. While the prevalence of antiretroviral errors in year 3 was similar to that of year 2 (37% of 276 admissions), there was a significant decrease in the prevalence of uncorrected errors at discharge (12%) with the use of prospective review and intervention. Interventions, such as education and guideline development, can aid in reducing ART medication errors, but a committed stewardship program is necessary to elicit the greatest impact. © 2016 Pharmacotherapy Publications, Inc.
NASA Astrophysics Data System (ADS)
Choi, J. H.; Kim, S. W.; Won, J. S.
2017-12-01
The objective of this study is to monitor and evaluate the stability of buildings in Seoul, Korea. The study includes both algorithm development and application to a case study. The development focuses on improving the PSI approach for discriminating various geophysical phase components and separating them from the target displacement phase. Thermal expansion is one of the key components that make precise displacement measurement difficult. The core idea is to optimize the thermal expansion factor using air temperature data and to model the corresponding phase by fitting the residual phase. We used TerraSAR-X SAR data acquired over two years, from 2011 to 2013, in Seoul, Korea, where the seasonal temperature fluctuation is considerable. Another problem is the high-rise buildings of Seoul, which contribute serious DEM errors. To avoid a high computational burden and unstable solutions of the nonlinear equation due to the unknown parameters (a thermal expansion parameter as well as the two conventional parameters, linear velocity and DEM error), we separate the phase model into two main steps. First, multi-baseline pairs with very short time intervals, for which deformation and thermal expansion components are negligible, are used to estimate the DEM errors. Second, single-baseline pairs are used to estimate the two remaining parameters, the linear deformation rate and the thermal expansion. The thermal expansion of buildings correlates closely with the seasonal temperature fluctuation. Figure 1 shows deformation patterns of two selected buildings in Seoul. In the left column of Figure 1, it is difficult to observe the true ground subsidence because of a large cyclic pattern caused by thermal dilation of the buildings; such thermal dilation can easily lead to wrong conclusions. After correction by the proposed method, the true ground subsidence can be measured precisely, as in the bottom right panel of Figure 1. The results demonstrate how the thermal expansion phase obscures time-series measurements of ground motion and how well the proposed approach removes the noise phases caused by thermal expansion and DEM errors. Some of the detected displacements matched well with previously reported events, such as ground subsidence and a sinkhole.
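The second estimation step reduces, per target, to a small least-squares problem. A toy Python sketch (synthetic numbers and illustrative notation, not the authors' processor) is:

```python
import numpy as np

# After DEM-error correction, model each pair's phase as a linear deformation
# term (temporal baseline dt) plus a thermal term proportional to the pair's
# temperature difference dT; both coefficients follow from least squares.
dt = np.array([0.1, 0.3, 0.5, 0.8, 1.0, 1.5])        # temporal baselines (years)
dT = np.array([12.0, -8.0, 15.0, -20.0, 5.0, 18.0])  # temperature differences (K)
phase = -2.0 * dt + 0.05 * dT + np.random.default_rng(2).normal(0, 0.05, 6)

A = np.column_stack([dt, dT])
(vel, k_thermal), *_ = np.linalg.lstsq(A, phase, rcond=None)
print(f"deformation rate {vel:.2f} rad/yr, thermal factor {k_thermal:.3f} rad/K")
```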
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data; the resulting resistivity tomograph was used as the prior information for nonlinear inversion of the time-lapse data, and a 3% random noise was assigned to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. The mean and standard deviation of CO₂ saturation were then calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data, and on inversion constraints such as temporal roughness. Five hundred realizations, requiring 3.5 h on a single 12-core node, were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, whereas the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may require days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
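The bootstrap recipe itself is generic and can be sketched with a toy linear inverse problem standing in for the nonlinear ERT inversion (all numbers synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))                  # toy linear "forward model"
m_true = rng.normal(size=10)
d_obs = A @ m_true + rng.normal(0, 0.03, 40)   # 3% noise, as for the baseline

def invert(d):
    # Stand-in for the deterministic nonlinear inversion.
    m, *_ = np.linalg.lstsq(A, d, rcond=None)
    return m

# Parametric bootstrap: perturb the data with the estimated noise level,
# re-solve for each realization, and summarise the ensemble.
samples = np.array([invert(d_obs + rng.normal(0, 0.03, 40)) for _ in range(500)])
m_mean, m_std = samples.mean(axis=0), samples.std(axis=0)
```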
Flint, Lorraine E.; Flint, Alan L.
2012-01-01
Stream temperature estimates under future climatic conditions were needed in support of fish production modeling for evaluation of effects of dam removal in the Klamath River Basin. To allow for the persistence of the Klamath River salmon fishery, an upcoming Secretarial Determination in 2012 will review potential changes in water quality and stream temperature to assess alternative scenarios, including dam removal. Daily stream temperature models were developed by using a regression model approach with simulated net solar radiation, vapor density deficit calculated on the basis of air temperature, and mean daily air temperature. Models were calibrated for 6 streams in the Lower, and 18 streams in the Upper, Klamath Basin by using measured stream temperatures for 1999-2008. The standard error of the y-estimate for the estimation of stream temperature for the 24 streams ranged from 0.36 to 1.64°C, with an average error of 1.12°C for all streams. The regression models were then used with projected air temperatures to estimate future stream temperatures for 2010-99. Although the mean change from the baseline historical period of 1950-99 to the projected future period of 2070-99 is only 1.2°C, it ranges from 3.4°C for the Shasta River to no change for Fall Creek and Trout Creek. Variability is also evident in the future with a mean change in temperature for all streams from the baseline period to the projected period of 2070-99 of only 1°C, while the range in stream temperature change is from 0 to 2.1°C. The baseline period, 1950-99, to which the air temperature projections were corrected, established the starting point for the projected changes in air temperature. The average measured daily air temperature for the calibration period 1999-2008, however, was found to be as much as 2.3°C higher than baseline for some rivers, indicating that warming conditions have already occurred in many areas of the Klamath River Basin, and that the stream temperature projections for the 21st century could be underestimating the actual change.
Investigation of Space Interferometer Control Using Imaging Sensor Output Feedback
NASA Technical Reports Server (NTRS)
Leitner, Jesse A.; Cheng, Victor H. L.
2003-01-01
Numerous space interferometry missions are planned for the next decade to verify different enabling technologies towards very-long-baseline interferometry to achieve high-resolution imaging and high-precision measurements. These objectives will require coordinated formations of spacecraft separately carrying optical elements comprising the interferometer. High-precision sensing and control of the spacecraft and the interferometer-component payloads are necessary to deliver sub-wavelength accuracy to achieve the scientific objectives. For these missions, the primary scientific product of interferometer measurements may be the only source of data available at the precision required to maintain the spacecraft and interferometer-component formation. A concept is studied for detecting the interferometer's optical configuration errors based on information extracted from the interferometer sensor output. It enables precision control of the optical components, and, in cases of space interferometers requiring formation flight of spacecraft that comprise the elements of a distributed instrument, it enables the control of the formation-flying vehicles because independent navigation or ranging sensors cannot deliver the high-precision metrology over the entire required geometry. Since the concept can act on the quality of the interferometer output directly, it can detect errors outside the capability of traditional metrology instruments, and provide the means needed to augment the traditional instrumentation to enable enhanced performance. Specific analyses performed in this study include the application of signal-processing and image-processing techniques to solve the problems of interferometer aperture baseline control, interferometer pointing, and orientation of multiple interferometer aperture pairs.
Radiographic absorptiometry method in measurement of localized alveolar bone density changes.
Kuhl, E D; Nummikoski, P V
2000-03-01
The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips, which were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in a low-density background, and up to 69% overestimation when in a high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.
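The abstract does not spell out the precision-error computation; a standard root-mean-square coefficient-of-variation formula for duplicate measurements, one common choice in densitometry, can be sketched as follows:

```python
import numpy as np

# RMS coefficient-of-variation precision error from paired repeat
# measurements x1, x2 of unchanged sites (illustrative formula).
def precision_error_percent(x1, x2):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    sd_pair = np.abs(x1 - x2) / np.sqrt(2)   # SD of each duplicate pair
    means = (x1 + x2) / 2
    return 100 * np.sqrt(np.mean((sd_pair / means) ** 2))

print(precision_error_percent([1.00, 0.95, 1.10], [1.04, 0.93, 1.05]))
```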
NASA Technical Reports Server (NTRS)
Lu, Hui-Ling; Cheng, H. L.; Lyon, Richard G.; Carpenter, Kenneth G.
2007-01-01
The long-baseline space interferometer concept involving formation flying of multiple spacecraft holds great promise as future space missions for high-resolution imagery. A major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to accurately control these spacecraft and their optics payloads in the specified configuration. Our research focuses on the determination of the optical errors to achieve fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present a suite of estimation tools that can effectively extract from the raw interferometric image relative x/y, piston translational and tip/tilt deviations at the exit pupil aperture. The use of these error estimates in achieving control of the interferometer elements is demonstrated using simulated as well as laboratory-collected interferometric stellar images.
NASA Astrophysics Data System (ADS)
Bonforte, A.; Casu, F.; de Martino, P.; Guglielmino, F.; Lanari, R.; Manzo, M.; Obrizzo, F.; Puglisi, G.; Sansosti, E.; Tammaro, U.
2009-04-01
Differential Synthetic Aperture Radar Interferometry (DInSAR) is a methodology for measuring ground deformation rates and time series over relatively large areas. Several different approaches have been developed over the past few years; they all share the capability to measure deformation over a relatively wide area (say 100 km by 100 km) with a high density of measuring points. For these reasons, DInSAR represents a very useful tool for investigating geophysical phenomena, particularly in volcanic areas. As for any measuring technique, knowledge of the attainable accuracy is of fundamental importance. DInSAR has several error sources, such as orbital inaccuracies, phase unwrapping errors, atmospheric artifacts, and effects related to the reference-point selection, making it very difficult to define a theoretical error model. A practical way to assess the accuracy is to compare DInSAR results with independent measurements, such as GPS or levelling. Here we present an in-depth comparison between the deformation measurements obtained with the DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm and those from continuous GPS stations. The selected volcanic test-sites are Etna, Vesuvio and Campi Flegrei, in Italy. From continuous GPS data, solutions are computed for the same days on which SAR data are acquired, for direct comparison. Moreover, three-dimensional GPS displacement vectors are projected along the radar line of sight of both ascending and descending acquisition orbits. GPS data are then compared with the coherent DInSAR pixels closest to the GPS station. Relevant statistics of the differences between the two measurements are computed and correlated with scene parameters that may affect DInSAR accuracy (altitude, terrain slope, etc.).
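The projection of three-dimensional GPS displacement vectors onto the radar line of sight is a dot product with the LOS unit vector. A minimal sketch, assuming the LOS is parameterized by incidence angle and look azimuth (sign conventions differ between processors, so the numbers here are purely illustrative):

```python
import numpy as np

def gps_to_los(d_enu, incidence_deg, look_azimuth_deg):
    """Project a 3D GPS displacement (east, north, up) onto the radar
    line of sight. The LOS unit vector points from the ground target to
    the satellite; look_azimuth is the azimuth of that vector's
    horizontal projection, clockwise from north. Conventions vary
    between processors, so check the sign against a known case.
    """
    theta = np.radians(incidence_deg)
    az = np.radians(look_azimuth_deg)
    los = np.array([np.sin(theta) * np.sin(az),   # east component
                    np.sin(theta) * np.cos(az),   # north component
                    np.cos(theta)])               # up component
    return np.dot(np.asarray(d_enu, float), los)

# Illustrative: 5 mm uplift plus small horizontal motion, with an
# assumed 23 deg incidence and 257 deg look azimuth
print(gps_to_los([2.0, -1.0, 5.0], 23.0, 257.0))
```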
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neben, Abraham R.; Hewitt, Jacqueline N.; Dillon, Joshua S.
2016-03-20
Accurate antenna beam models are critical for radio observations aiming to isolate the redshifted 21 cm spectral line emission from the Dark Ages and the Epoch of Reionization (EOR) and unlock the scientific potential of 21 cm cosmology. Past work has focused on characterizing mean antenna beam models using either satellite signals or astronomical sources as calibrators, but antenna-to-antenna variation due to imperfect instrumentation has remained unexplored. We characterize this variation for the Murchison Widefield Array (MWA) through laboratory measurements and simulations, finding typical deviations of the order of ±10%–20% near the edges of the main lobe and in the sidelobes. We consider the ramifications of these results for image- and power spectrum-based science. In particular, we simulate visibilities measured by a 100 m baseline and find that, using an otherwise perfect foreground model, unmodeled beam-forming errors severely limit foreground subtraction accuracy within the region of Fourier space contaminated by foreground emission (the “wedge”). This region likely contains much of the cosmological signal, and accessing it will require measurement of per-antenna beam patterns. However, unmodeled beam-forming errors do not contaminate the Fourier space region expected to be free of foreground contamination (the “EOR window”), showing that foreground avoidance remains a viable strategy.
Restoring method for missing data of spatial structural stress monitoring based on correlation
NASA Astrophysics Data System (ADS)
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Gaps in the monitoring data record affect data analysis and safety assessment of the structure. Based on long-term monitoring data from the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected from the three months of the season in which the data to be restored are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or higher, the average interpolation error is about 5%. For multiple linear regression, interpolation accuracy does not increase significantly once more than six correlated points are used. The stress baseline value of the construction step should be calculated before interpolating missing data from the construction stage, in which case the average error is within 10%. The interpolation error for continuous missing data is slightly larger than for discrete missing data. The missing-data rate should not exceed about 30% for this method. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
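The interpolation step described above is ordinary least-squares regression of the target point's stress on the stress of correlated points; a minimal sketch (variable names are illustrative, and the daytime/nighttime split the authors use is omitted):

```python
import numpy as np

def restore_missing_stress(X_train, y_train, X_missing):
    """Fit y = b0 + X @ b over periods when the target sensor worked,
    then predict the target stress where its record is missing.
    X_*: stress histories of correlated points (n_samples x n_points)."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    A_miss = np.column_stack([np.ones(len(X_missing)), X_missing])
    return A_miss @ coef

# Illustrative: two correlated points predicting a third
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 10.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 200)
y_hat = restore_missing_stress(X[:150], y[:150], X[150:])
print(np.mean(np.abs((y_hat - y[150:]) / y[150:])) * 100, "% mean error")
```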
Mathiasen, Ross; Hogrefe, Christopher; Harland, Kari; Peterson, Andrew; Smoot, M Kyle
2018-02-15
The Balance Error Scoring System (BESS) is a commonly used concussion assessment tool. Recent studies have questioned the stability and reliability of baseline BESS scores. The purpose of this longitudinal prospective cohort study was to examine differences in yearly baseline BESS scores in athletes participating on an NCAA Division-I football team. NCAA Division-I freshman football athletes were videotaped performing the BESS test at matriculation and after 1 year of participation in the football program. Twenty-three athletes were enrolled in year 1 of the study, and 25 athletes were enrolled in year 2. Athletes enrolled in year 1 were again videotaped after year 2 of the study. The paired t-test was used to assess change over time in the firm-surface, foam-surface, and cumulative BESS scores. Additionally, interrater and intrarater reliability values were calculated. Cumulative errors on the BESS decreased significantly from a mean of 20.3 at baseline to 16.8 after 1 year of participation. The mean number of errors following the second year of participation was 15.0. Interrater reliability for the cumulative score ranged from 0.65 to 0.75. Intrarater reliability was 0.81. After 1 year of participation in an NCAA Division-I football program, there is a statistically and clinically significant improvement in BESS scores. Although additional improvement in BESS scores was noted after a second year of participation, it did not reach statistical significance. Football athletes should undergo baseline BESS testing at least yearly if the BESS is to be optimally useful as a diagnostic test for concussion.
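The core comparison is a paired t-test on each athlete's baseline and follow-up scores; a minimal sketch with made-up scores (not the study data):

```python
import numpy as np
from scipy import stats

# Illustrative cumulative BESS error scores (lower is better)
baseline = np.array([22, 19, 24, 18, 21, 17, 23])
year_one = np.array([18, 16, 20, 15, 17, 14, 19])

t, p = stats.ttest_rel(baseline, year_one)
print(f"mean change = {np.mean(baseline - year_one):.1f} errors, "
      f"t = {t:.2f}, p = {p:.4f}")
```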
NASA Astrophysics Data System (ADS)
Brian Leen, J.; Berman, Elena S. F.; Liebson, Lindsay; Gupta, Manish
2012-04-01
Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes with faster, more cost-effective, field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to those absorbed by the water isotopomers, leading to δ2H and δ18O measurement errors (Δδ2H and Δδ18O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, mBB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, mNB. These metrics are used to correct for Δδ2H and Δδ18O. The method was tested on 14 instruments; Δδ18O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ2H scaled linearly with narrowband absorbers and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with mNB. Using the isotope error versus mNB and mBB curves, Δδ2H and Δδ18O resulting from methanol contamination were corrected to maximum mean absolute errors of 0.93‰ and 0.25‰, respectively, while Δδ2H and Δδ18O from ethanol contamination were corrected to maximum mean absolute errors of 1.22‰ and 0.22‰. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. The general technique may also be extended to other laser-based analyzers, including methane and carbon dioxide isotope sensors.
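A sketch of the correction idea, under the stated logarithmic scaling of isotope error with the narrowband metric: fit the error-versus-metric curve on spiked standards, then subtract the predicted error from field samples. All numbers and names here are illustrative and, as the authors stress, the curve must be calibrated per instrument:

```python
import numpy as np

# Illustrative calibration: spiked standards with known true delta values
# give the isotope error as a function of the narrowband metric m_NB
m_nb = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])     # assumed metric values
d2h_err = np.array([0.8, 2.1, 3.5, 4.8, 6.2])       # permil, assumed

# Errors scale logarithmically with m_NB, so fit err = a*log10(m_NB) + b
a, b = np.polyfit(np.log10(m_nb), d2h_err, 1)

def correct_d2h(measured_d2h, m_nb_sample):
    """Subtract the predicted contamination-induced error."""
    return measured_d2h - (a * np.log10(m_nb_sample) + b)

print(correct_d2h(-61.0, 5e-4))
```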
Sport and team differences on baseline measures of sport-related concussion.
Zimmer, Adam; Piecora, Kyle; Schuster, Danielle; Webbe, Frank
2013-01-01
With the National Collegiate Athletic Association (NCAA) mandating the presence and practice of concussion-management plans in collegiate athletic programs, institutions will consider potential approaches for concussion management, including both baseline and normative comparison approaches. To examine sport and team differences in baseline performance on a computer-based neurocognitive measure and 2 standard sideline measures of cognition and balance, and to determine the potential effect of the premorbid factors sex and height on baseline performance. Cross-sectional study. University laboratory. A total of 437 NCAA Division II student-athletes (males = 273, females = 164; age = 19.61 ± 1.64 years, height = 69.89 ± 4.04 inches [177.52 ± 10.26 cm]) were recruited during mandatory preseason testing conducted in a concussion-management program. The computerized Concussion Resolution Index (CRI), the Standardized Assessment of Concussion (Form A; SAC), and the Balance Error Scoring System (BESS). Players on the men's basketball team tended to perform worse on the baseline measures, whereas soccer players tended to perform better. We found a difference in total BESS scores between these sports (P = .002). We saw a difference between sports on the hard-surface portion of the BESS (F(6,347) = 3.33, P = .003, ηp² = 0.05). No sport, team, or sex differences were found with SAC scores (P > .05). We noted differences between sports and teams in the CRI indices, with basketball, particularly the men's team, performing worse than soccer (P < .001) and softball/baseball (P = .03). When sex and height were considered as possible sources of variation in BESS and CRI team or sport differences, height was a covariate for the team (F(1,385) = 5.109, P = .02, ηp² = 0.013) and sport (F(1,326) = 11.212, P = .001, ηp² = 0.033) analyses, but the interaction of sex and sport on CRI indices was not significant in any test (P > .05). Given that differences in neurocognitive functioning and performance among sports and teams exist, the comparison of posttraumatic and baseline assessment may lead to more accurate diagnoses of concussion and safer return-to-participation decision making than the use of normative comparisons.
Bezuidenhout, Karla; Rensburg, Megan A; Hudson, Careen L; Essack, Younus; Davids, M Razeen
2016-07-01
Many clinical laboratories require that specimens for serum and urine osmolality determination be processed within 3 h of sampling or need to arrive at the laboratory on ice. This protocol is based on the World Health Organization report on sample storage and stability, but the recommendation lacks good supporting data. We studied the effect of storage temperature and time on osmolality measurements. Blood and urine samples were obtained from 16 patients and 25 healthy volunteers. Baseline serum, plasma and urine osmolality measurements were performed within 30 min. Measurements were then made at 3, 6, 12, 24 and 36 h on samples stored at 4-8℃ and room temperature. We compared baseline values with subsequent measurements and used difference plots to illustrate changes in osmolality. At 4-8℃, serum and plasma osmolality were stable for up to 36 h. At room temperature, serum and plasma osmolality were very stable for up to 12 h. At 24 and 36 h, changes from baseline osmolality were statistically significant and exceeded the total allowable error of 1.5% but not the reference change value of 4.1%. Urine osmolality was extremely stable at room temperature with a mean change of less than 1 mosmol/kg at 36 h. Serum and plasma samples can be stored at room temperature for up to 36 h before measuring osmolality. Cooling samples to 4-8℃ may be useful when delays in measurement beyond 12 h are anticipated. Urine osmolality is extremely stable for up to 36 h at room temperature.
Nair, Bala G; Peterson, Gene N; Newman, Shu-Fang; Wu, Wei-Ying; Kolios-Morris, Vickie; Schwid, Howard A
2012-06-01
Continuation of perioperative beta-blockers for surgical patients who are receiving beta-blockers prior to arrival for surgery is an important quality measure (SCIP-Card-2). For this measure to be considered successful, the name, date, and time of the perioperative beta-blocker must be documented. Alternately, if the beta-blocker is not given, the medical reason for not administering it must be documented. Before the study was conducted, the institution lacked a highly reliable process to document the date and time of self-administration of beta-blockers prior to hospital admission. Because of this, compliance with the beta-blocker quality measure was poor (~65%). To improve this measure, the anesthesia care team was made responsible for documenting perioperative beta-blockade. Clear documentation guidelines were outlined, and an electronic Anesthesia Information Management System (AIMS) was configured to facilitate complete documentation of the beta-blocker quality measure. In addition, real-time electronic alerts were generated using Smart Anesthesia Messenger (SAM), an internally developed decision-support system, to notify users concerning incomplete beta-blocker documentation. Weekly compliance for perioperative beta-blocker documentation before the study was 65.8 +/- 16.6%, which served as the baseline value. When the anesthesia care team started documenting perioperative beta-blockers in AIMS, compliance was 60.5 +/- 8.6% (p = .677 as compared with baseline). Electronic alerts with SAM improved documentation compliance to 94.6 +/- 3.5% (p < .001 as compared with baseline). To achieve high compliance for the beta-blocker measure, it is essential to (1) clearly assign a medical team to perform beta-blocker documentation and (2) enhance features in the electronic medical systems to alert the user concerning incomplete documentation.
Myopia, contact lens use and self-esteem
Dias, Lynette; Manny, Ruth E; Weissberg, Erik; Fern, Karen D
2013-01-01
Purpose To evaluate whether contact lens (CL) use was associated with self-esteem in myopic children originally enrolled in the Correction of Myopia Evaluation Trial (COMET), which after five years continued as an observational study of myopia progression in which CL use was permitted. Methods Usable data at the six-year visit, one year after CL use was allowed (n = 423/469, age 12-17 years), included questions on CL use, refractive error measurements and self-reported self-esteem in several areas (scholastic/athletic competence, physical appearance, social acceptance, behavioural conduct and global self-worth). Self-esteem, scored from 1 (low) to 4 (high), was measured by the Self-Perception Profile for Children in participants under 14 years or the Self-Perception Profile for Adolescents in those 14 years and older. Multiple regression analyses were used to evaluate associations between self-esteem and relevant factors identified by univariate analyses (e.g., CL use, gender, ethnicity), while adjusting for baseline self-esteem prior to CL use. Results Mean (± SD) self-esteem scores at the six-year visit (mean age = 15.3 ± 1.3 years; mean refractive error = −4.6 ± 1.5 D) ranged from 2.74 (± 0.76) on athletic competence to 3.33 (± 0.53) on global self-worth. CL wearers (n = 224) compared to eyeglass wearers (n = 199) were more likely to be female (p < 0.0001). Those who chose to wear CLs had higher social acceptance, athletic competence and behavioural conduct scores (p < 0.05) at baseline compared to eyeglass users. CL users continued to report higher social acceptance scores at the six-year visit (p = 0.03), after adjusting for baseline scores and other covariates. Ethnicity was also independently associated with social acceptance in the multivariable analyses (p = 0.011); African-Americans had higher scores than Asians, Whites and Hispanics. Age and refractive error were not associated with self-esteem or CL use. Conclusions COMET participants who chose to wear CLs after five years of eyeglass use had higher self-esteem compared to those who remained in glasses, both preceding and following CL use. This suggests that self-esteem may influence the decision to wear CLs and that CLs in turn are associated with higher self-esteem in the individuals most likely to wear them. PMID:23763482
Does Exercise Improve Cognitive Performance? A Conservative Message from Lord's Paradox
Liu, Sicong; Lebeau, Jean-Charles; Tenenbaum, Gershon
2016-01-01
Although extant meta-analyses support the notion that exercise results in cognitive performance enhancement, methodological shortcomings are noted in the primary evidence. The present study examined relevant randomized controlled trials (RCTs) published in the past 20 years (1996–2015) for methodological concerns arising from Lord's paradox. Our analysis revealed that RCTs supporting a positive effect of exercise on cognition are likely to include Type I error(s). This result can be attributed to the use of gain-score analysis on pretest-posttest data as well as the presence of control-group superiority over the exercise group on baseline cognitive measures. To improve the accuracy of causal inferences in this area, analysis of covariance on pretest-posttest data is recommended under the assumption of group equivalence. Important experimental procedures are discussed to maintain group equivalence. PMID:27493637
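The contrast between the two analyses is easy to see in code; a minimal sketch using simulated pretest-posttest data (the statsmodels formulas are standard, not the study's code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
group = np.repeat([0, 1], n // 2)            # 0 = control, 1 = exercise
pre = rng.normal(50, 10, n)
post = pre + 2.0 * group + rng.normal(0, 5, n)   # true effect = 2.0
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# Gain-score analysis: implicitly fixes the pre-post slope at 1
gain = smf.ols("I(post - pre) ~ group", df).fit()
# ANCOVA: estimates the slope; valid given baseline group equivalence
ancova = smf.ols("post ~ pre + group", df).fit()
print(gain.params["group"], ancova.params["group"])
```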
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (the peak region and the wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth.
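The blending degeneracy comes from the model itself: only a fraction of the baseline flux is magnified, so a heavily blended event can mimic a longer, lower-amplitude unblended one. A minimal sketch of the standard blended point-lens light curve (parameter values are illustrative):

```python
import numpy as np

def blended_lightcurve(t, t0, tE, u0, f_blend, f_base=1.0):
    """Blended point-source point-lens flux: a fraction f_blend of the
    baseline flux is magnified; the rest is unrelated blended light.
    A(u) is the standard Paczynski magnification."""
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    A = (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))
    return f_base * (f_blend * A + (1.0 - f_blend))

t = np.linspace(-40, 40, 200)   # days
# Compare a blended and an unblended event near peak
flux_blended = blended_lightcurve(t, t0=0.0, tE=20.0, u0=0.1, f_blend=0.4)
flux_clean = blended_lightcurve(t, t0=0.0, tE=20.0, u0=0.1, f_blend=1.0)
```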
Modeling, Analysis, and Control of Demand Response Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathieu, Johanna L.
2012-05-01
While the traditional goal of an electric power system has been to control supply to fulfill demand, the demand side can play an active role in power systems via Demand Response (DR), defined by the Department of Energy (DOE) as “a tariff or program established to motivate changes in electric use by end-use customers in response to changes in the price of electricity over time, or to give incentive payments designed to induce lower electricity use at times of high market prices or when grid reliability is jeopardized” [29]. DR can provide a variety of benefits, including reducing peak electric loads when the power system is stressed and fast-timescale energy balancing. Therefore, DR can improve grid reliability and reduce wholesale energy prices and their volatility. This dissertation focuses on analyzing both recent and emerging DR paradigms. Recent DR programs have focused on peak load reduction in commercial buildings and industrial facilities (C&I facilities). We present methods for using 15-minute-interval electric load data, commonly available from C&I facilities, to help building managers understand building energy consumption and ‘ask the right questions’ to discover opportunities for DR. Additionally, we present a regression-based model of whole-building electric load, i.e., a baseline model, which allows us to quantify DR performance. We use this baseline model to understand the performance of 38 C&I facilities participating in an automated dynamic-pricing DR program in California. In this program, facilities are expected to exhibit the same response each DR event. We find that baseline model error makes it difficult to precisely quantify changes in electricity consumption and to understand whether C&I facilities exhibit event-to-event variability in their response to DR signals. Therefore, we present a method to compute baseline model error and a metric to determine how much observed DR variability results from baseline model error rather than real variability in response. We find that, in general, baseline model error is large. Though some facilities exhibit real DR variability, most observed variability results from baseline model error. In some cases, however, aggregations of C&I facilities exhibit real DR variability, which could create challenges for power system operation. These results have implications for DR program design and deployment. Emerging DR paradigms focus on faster-timescale DR. Here, we investigate methods to coordinate aggregations of residential thermostatically controlled loads (TCLs), including air conditioners and refrigerators, to manage frequency and energy imbalances in power systems. We focus on opportunities to centrally control loads with high accuracy but low requirements for sensing and communications infrastructure. Specifically, we compare cases when measured load state information (e.g., power consumption and temperature) is 1) available in real time; 2) available, but not in real time; and 3) not available. We develop Markov Chain models to describe the temperature state evolution of heterogeneous populations of TCLs, and use Kalman filtering for both state and joint parameter/state estimation. We present a look-ahead proportional controller to broadcast control signals to all TCLs, which always remain in their temperature dead-band. Simulations indicate that it is possible to achieve power tracking RMS errors in the range of 0.26–9.3% of steady-state aggregated power consumption.
Results depend upon the information available for system identification, state estimation, and control. We find that, depending upon the performance required, TCLs may not need to provide state information to the central controller in real time or at all. We also estimate the size of the TCL potential resource; potential revenue from participation in markets; and break-even costs associated with deploying DR-enabling technologies. We find that current TCL energy storage capacity in California is 8–11 GWh, with refrigerators contributing the most. Annual revenues from participation in regulation vary from $10 to $220 per TCL per year depending upon the type of TCL and climate zone, while load following and arbitrage revenues are more modest at $2 to $35 per TCL per year. These results lead to a number of policy recommendations that will make it easier to engage residential loads in fast timescale DR.
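A toy version of the population being controlled helps fix ideas: a heterogeneous set of dead-band thermostats whose aggregate power a central controller would track. This is a minimal first-order simulation with assumed parameters, not the dissertation's Markov chain or Kalman filter machinery:

```python
import numpy as np

def simulate_tcls(n=1000, steps=720, dt=60.0):
    """Dead-band thermostat population (refrigerator-like): each unit
    cools when ON, relaxes toward ambient when OFF, and switches state
    at the edges of its temperature dead-band. Aggregate power is the
    quantity a central DR controller would track. All parameters are
    illustrative, not calibrated to any real fleet."""
    rng = np.random.default_rng(1)
    setpoint, half_band = 2.5, 0.5            # deg C
    theta_a = 20.0                            # ambient temperature, deg C
    C = rng.uniform(0.4, 0.8, n)              # thermal capacitance, kWh/degC
    R = rng.uniform(80.0, 120.0, n)           # thermal resistance, degC/kW
    P = rng.uniform(0.2, 0.4, n)              # electric power, kW
    theta = rng.uniform(setpoint - half_band, setpoint + half_band, n)
    on = rng.random(n) < 0.5
    a = np.exp(-dt / 3600.0 / (R * C))        # per-step decay factor
    agg = np.empty(steps)
    for k in range(steps):
        # ON units are pulled toward theta_a - COP*P*R (2.5 ~ assumed COP)
        target = theta_a - on * (P * R * 2.5)
        theta = a * theta + (1 - a) * target
        on = np.where(theta > setpoint + half_band, True,
                      np.where(theta < setpoint - half_band, False, on))
        agg[k] = np.sum(P * on)
    return agg

print(simulate_tcls().mean(), "kW mean aggregate power")
```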
Neutrinos help reconcile Planck measurements with the local universe.
Wyman, Mark; Rudd, Douglas H; Vanderveld, R Ali; Hu, Wayne
2014-02-07
Current measurements of the low and high redshift Universe are in tension if we restrict ourselves to the standard six-parameter model of flat ΛCDM. This tension has two parts. First, the Planck satellite data suggest a higher normalization of matter perturbations than local measurements of galaxy clusters. Second, the expansion rate of the Universe today, H0, derived from local distance-redshift measurements is significantly higher than that inferred using the acoustic scale in galaxy surveys and the Planck data as a standard ruler. The addition of a sterile neutrino species changes the acoustic scale and brings the two into agreement; meanwhile, adding mass to the active neutrinos or to a sterile neutrino can suppress the growth of structure, bringing the cluster data into better concordance as well. For our fiducial data set combination, with statistical errors for clusters, a model with a massive sterile neutrino shows 3.5σ evidence for a nonzero mass and an even stronger rejection of the minimal model. A model with massive active neutrinos and a massless sterile neutrino is similarly preferred. An eV-scale sterile neutrino mass--of interest for short baseline and reactor anomalies--is well within the allowed range. We caution that (i) unknown astrophysical systematic errors in any of the data sets could weaken this conclusion, but they would need to be several times the known errors to eliminate the tensions entirely; (ii) the results we find are at some variance with analyses that do not include cluster measurements; and (iii) some tension remains among the data sets even when new neutrino physics is included.
Variations of pupil centration and their effects on video eye tracking.
Wildenmann, Ulrich; Schaeffel, Frank
2013-11-01
To evaluate measurement errors that are introduced in video eye tracking when pupil centration changes with pupil size. Software was developed under Visual C++ to track both the pupil centre and the corneal centre at an 87 Hz sampling rate, at baseline pupil sizes of 4.75 mm (800 lux room illuminance) and while pupil constrictions were elicited by a flashlight. Corneal centres were determined by a circle fit through the pixels detected at the corneal margin by an edge detection algorithm. Standard deviations for repeated measurements were ± 0.04 mm for horizontal pupil centre position, ± 0.04 mm for horizontal corneal centre position, ± 0.03 mm for vertical pupil centre position and ± 0.05 mm for vertical corneal centre position. Ten subjects were tested (five female, five male, age 25-58 years). At 4 mm pupil sizes, the pupils were nasally decentred relative to the corneal centre by 0.18 ± 0.19 mm in the right eyes and -0.14 ± 0.22 mm in the left eyes. Vertical decentrations were 0.30 ± 0.30 mm and 0.27 ± 0.29 mm, respectively, always in a superior direction. At baseline pupil sizes (the natural pupil sizes at 800 lux) of 4.75 ± 0.52 mm, the decentrations became less (right and left eyes: horizontal 0.17 ± 0.20 mm and -0.12 ± 0.22 mm, and vertical 0.26 ± 0.28 mm and 0.20 ± 0.25 mm). While pupil decentration changed minimally in eight of the subjects, it shifted considerably in two others. Averaged over all subjects, the shift of the pupil centre position per millimetre of pupil constriction was not significant (right and left eyes: -0.03 ± 0.07 mm and 0.03 ± 0.04 mm nasally per mm of pupil size change, respectively, and -0.04 ± 0.06 mm and -0.05 ± 0.12 mm superiorly). The direction and magnitude of the changes in pupil centration could not be predicted from the initial decentration at baseline pupil sizes. In line with data in the literature, the pupil centre was significantly decentred relative to the corneal centre in the nasal and superior direction. Pupil decentration changed significantly with pupil size, by 0.05 mm on average for 1 mm of constriction. Assuming a Hirschberg ratio of 12° mm⁻¹, a shift of 0.05 mm is equivalent to a measurement error in a Purkinje image-based eye tracker of 0.6°. However, the induced measurement error could also exceed 1.5° in some subjects for only a 1 mm change in pupil size.
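The conversion quoted at the end is simple arithmetic on the stated Hirschberg ratio:

```latex
\Delta\theta \approx 12^{\circ}\,\mathrm{mm}^{-1} \times 0.05\,\mathrm{mm} = 0.6^{\circ}
```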
An evaluation of space time cube representation of spatiotemporal patterns.
Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine
2009-01-01
Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.
Triano, John J; Giuliano, Dominic; Kanga, Ismat; Starmer, David; Brazeau, Jennifer; Screaton, C Elaine; Semple, Curtis
2015-01-01
The purpose of this study was to sample the stability of spinal manipulation performance in peak impulse force development over time and the ability of clinicians to adapt to arbitrary target levels with short-duration training. A pre-post experimental design was used. Human analog mannequins provided standardized simulation for performance measures. A convenience sample was recruited consisting of 41 local doctors of chiropractic with 5 years of active clinical practice experience. Thoracic impulse force was measured among clinicians at baseline, after 4 months at pretraining, and again posttraining. Intraclass correlation coefficient values and within-subject variability defined consistency. Malleability was measured by reduction of error (paired t tests) in achieving arbitrary targeted levels of force development normalized to the individual's typical performance. No difference was observed in subgroup vs baseline group characteristics. Good consistency was observed in force-time profiles (0.55 ≤ intraclass correlation coefficient ≤ 0.75) for force parameters over the 4-month interval. With short intervals of focused training, error rates in force delivery were reduced by 23% to 45%, depending on target. Within-subject variability was 1/3 to 1/2 that of between-subject variability. Load increases were directly related to rate of loading. The findings of this study show that recalibration of spinal manipulation performance of experienced clinicians toward arbitrary target values in the thoracic spine is feasible. This study found that experienced clinicians are internally consistent in performance of procedures under standardized conditions and that focused training may help clinicians learn to modulate procedure characteristics.
Shilton, Michael; Branney, Jonathan; de Vries, Bas Penning; Breen, Alan C
2015-01-01
The association between cervical lordosis (sagittal alignment) and neck pain is controversial. Further, it is unclear whether spinal manipulative therapy can change cervical lordosis. This study aimed to determine whether cervical lordosis changes after a course of spinal manipulation for non-specific neck pain. Posterior tangents of C2 and C6 were drawn on the lateral cervical fluoroscopic images of 29 patients with subacute/chronic non-specific neck pain and 30 healthy volunteers matched for age and gender, recruited August 2011 to April 2013. The resultant angle was measured using 'Image J' digital geometric software. The intra-observer repeatability (measurement error and reliability) and intra-subject repeatability (minimum detectable change (MDC) over 4 weeks) were determined in healthy volunteers. A comparison of cervical lordosis was made between patients and healthy volunteers at baseline. Change in lordosis between baseline and 4-week follow-up was determined in patients receiving spinal manipulation. Intra-observer measurement error for cervical lordosis was acceptable (SEM 3.6°) and reliability was substantial (ICC 0.98, 95% CI 0.962-0.991). The intra-subject MDC, however, was large (13.5°). There was no significant difference between the lordotic angles of patients and healthy volunteers (p = 0.16). The mean cervical lordotic increase over 4 weeks in patients was 2.1° (SD 9.2°), which was not significant (p = 0.12). This study found no difference in cervical lordosis (sagittal alignment) between patients with mild non-specific neck pain and matched healthy volunteers. Furthermore, there was no significant change in cervical lordosis in patients after 4 weeks of cervical spinal manipulation.
A Psychological Model for Aggregating Judgments of Magnitude
NASA Astrophysics Data System (ADS)
Merkle, Edgar C.; Steyvers, Mark
In this paper, we develop and illustrate a psychologically-motivated model for aggregating judgments of magnitude across experts. The model assumes that experts' judgments are perturbed from the truth by both systematic biases and random error, and it provides aggregated estimates that are implicitly based on the application of nonlinear weights to individual judgments. The model is also easily extended to situations where experts report multiple quantile judgments. We apply the model to expert judgments concerning flange leaks in a chemical plant, illustrating its use and comparing it to baseline measures.
A Randomized Trial of Soft Multifocal Contact Lenses for Myopia Control: Baseline Data and Methods.
Walline, Jeffrey J; Gaume Giannoni, Amber; Sinnott, Loraine T; Chandler, Moriah A; Huang, Juan; Mutti, Donald O; Jones-Jordan, Lisa A; Berntsen, David A
2017-09-01
The Bifocal Lenses In Nearsighted Kids (BLINK) study is the first soft multifocal contact lens myopia control study to compare add powers and measure peripheral refractive error in the vertical meridian, so it will provide important information about the potential mechanism of myopia control. The BLINK study is a National Eye Institute-sponsored, double-masked, randomized clinical trial to investigate the effects of soft multifocal contact lenses on myopia progression. This article describes the subjects' baseline characteristics and study methods. Subjects were 7 to 11 years old, had -0.75 to -5.00 spherical component and less than 1.00 diopter (D) astigmatism, and had 20/25 or better logMAR distance visual acuity with manifest refraction in each eye and with +2.50-D add soft bifocal contact lenses on both eyes. Children were randomly assigned to wear Biofinity single-vision, Biofinity Multifocal "D" with a +1.50-D add power, or Biofinity Multifocal "D" with a +2.50-D add power contact lenses. We examined 443 subjects at the baseline visits, and 294 (66.4%) subjects were enrolled. Of the enrolled subjects, 177 (60.2%) were female, and 200 (68%) were white. The mean (± SD) age was 10.3 ± 1.2 years, and 117 (39.8%) of the eligible subjects were younger than 10 years. The mean spherical equivalent refractive error, measured by cycloplegic autorefraction was -2.39 ± 1.00 D. The best-corrected binocular logMAR visual acuity with glasses was +0.01 ± 0.06 (20/21) at distance and -0.03 ± 0.08 (20/18) at near. The BLINK study subjects are similar to patients who would routinely be eligible for myopia control in practice, so the results will provide clinical information about soft bifocal contact lens myopia control as well as information about the mechanism of the treatment effect, if one occurs.
The impact of modelling errors on interferometer calibration for 21 cm power spectra
NASA Astrophysics Data System (ADS)
Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline
2017-09-01
We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.
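A sketch of what baseline-weighted, sky-based calibration looks like in practice: an alternating per-antenna least-squares gain solve (in the spirit of algorithms such as StEFCal), in which per-baseline weights can be chosen to downweight long baselines, as the authors propose. This is an illustrative implementation, not the paper's code:

```python
import numpy as np

def solve_gains(V, M, w, n_iter=50):
    """Solve V_ij ~ g_i * conj(g_j) * M_ij for per-antenna gains g by
    alternating least squares. V, M, w are N x N arrays (measured and
    model visibilities, per-baseline weights); the diagonal is ignored.
    E.g. w = exp(-(baseline_length / b0)**2) tapers long baselines."""
    N = V.shape[0]
    g = np.ones(N, dtype=complex)
    mask = ~np.eye(N, dtype=bool)
    for _ in range(n_iter):
        g_new = np.empty_like(g)
        for i in range(N):
            j = mask[i]
            z = g[j].conj() * M[i, j]            # predicted, up to g_i
            g_new[i] = (np.sum(w[i, j] * z.conj() * V[i, j])
                        / np.sum(w[i, j] * np.abs(z) ** 2))
        g = 0.5 * (g + g_new)                    # damping aids convergence
    return g

# Synthetic check: recover known gains from noiseless point-source data
rng = np.random.default_rng(0)
N = 8
g_true = 1 + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
M = np.ones((N, N), complex)
V = np.outer(g_true, g_true.conj()) * M
g = solve_gains(V, M, np.ones((N, N)))
print(np.allclose(np.outer(g, g.conj()) * M, V, atol=1e-6))
```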
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
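For concreteness, a minimal sketch of the third method and the MedAPE criterion on a synthetic series with day-of-week seasonality (the statsmodels Holt-Winters implementation stands in for whatever the authors used):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Illustrative syndromic counts with weekly (7-day) seasonality
rng = np.random.default_rng(0)
t = np.arange(400)
y = pd.Series(50 + 10 * np.sin(2 * np.pi * t / 7) + rng.poisson(5, 400))

train, test = y[:350], y[350:]
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=7).fit()
pred = fit.forecast(len(test))

residuals = test.values - pred.values   # the input fed to the detector
medape = np.median(np.abs(residuals) / test.values) * 100
print(f"MedAPE: {medape:.1f}%")
```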
Deformation Estimation In Non-Urban Areas Exploiting High Resolution SAR Data
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-01-01
Advanced techniques such as the Small Baseline Subset (SBAS) algorithm have been developed for terrain motion mapping in non-urban areas, with a focus on extracting information from distributed scatterers (DSs). SBAS uses small-baseline differential interferograms (to limit the effects of geometric decorrelation), and these are typically multilooked to reduce phase noise, resulting in a loss of resolution. Various error sources, e.g. phase unwrapping errors, topographic errors, temporal decorrelation and atmospheric effects, also affect the interferometric phase. The aim of our work is improved deformation monitoring in non-urban areas exploiting high-resolution SAR data. The paper provides technical details and a processing example of a newly developed technique which incorporates an adaptive spatial phase filtering algorithm for accurate high-resolution differential interferometric stacking, followed by deformation retrieval via the SBAS approach, where we perform the phase inversion using a more robust L1-norm minimization.
NASA Astrophysics Data System (ADS)
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2013-03-01
Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte-Carlo simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage to be adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performances of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only improved moderately the results over the homogeneous one. On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.
Lamb, Edmund J; Brettell, Elizabeth A; Cockwell, Paul; Dalton, Neil; Deeks, Jon J; Harris, Kevin; Higgins, Tracy; Kalra, Philip A; Khunti, Kamlesh; Loud, Fiona; Ottridge, Ryan S; Sharpe, Claire C; Sitch, Alice J; Stevens, Paul E; Sutton, Andrew J; Taal, Maarten W
2014-01-14
Uncertainty exists regarding the optimal method to estimate glomerular filtration rate (GFR) for disease detection and monitoring. Widely used GFR estimates have not been validated in British ethnic minority populations. Iohexol-measured GFR will be the reference against which each estimating equation will be compared. The estimating equations will be based upon serum creatinine and/or cystatin C. The eGFR-C study has 5 components: 1) A prospective longitudinal cohort study of 1300 adults with stage 3 chronic kidney disease followed for 3 years with reference (measured) GFR and test (estimated GFR [eGFR] and urinary albumin-to-creatinine ratio) measurements at baseline and 3 years. Test measurements will also be undertaken every 6 months. The study population will include a representative sample of South-Asians and African-Caribbeans. People with diabetes and proteinuria (ACR ≥30 mg/mmol) will comprise 20-30% of the study cohort. 2) A sub-study of patterns of disease progression of 375 people (125 each of Caucasian, Asian and African-Caribbean origin; in each case containing subjects at high and low risk of renal progression). Additional reference GFR measurements will be undertaken after 1 and 2 years to enable a model of disease progression and error to be built. 3) A biological variability study to establish reference change values for reference and test measures. 4) A modelling study of the performance of monitoring strategies on detecting progression, utilising estimates of accuracy, patterns of disease progression and estimates of measurement error from studies 1), 2) and 3). 5) A comprehensive cost database for each diagnostic approach will be developed to enable cost-effectiveness modelling of the optimal strategy. The performance of the estimating equations will be evaluated by assessing bias, precision and accuracy. Data will be modelled as a linear function of time utilising all available (maximum 7) time points compared with the difference between baseline and final reference values. The percentage of participants demonstrating large error with the respective estimating equations will be compared. Predictive value of GFR estimates and albumin-to-creatinine ratio will be compared amongst subjects that do or do not show progressive kidney function decline. The eGFR-C study will provide evidence to inform the optimal GFR estimate to be used in clinical practice. ISRCTN42955626.
Decroos, Francis Char; Stinnett, Sandra S; Heydary, Cynthia S; Burns, Russell E; Jaffe, Glenn J
2013-11-01
To determine the impact of segmentation error correction and the precision of standardized grading of time-domain optical coherence tomography (OCT) scans obtained during an interventional study for macular edema secondary to central retinal vein occlusion (CRVO). A reading center team of two readers and a senior reader evaluated 1199 OCT scans. Manual segmentation error correction (SEC) was performed. The frequency of SEC, the resulting change in central retinal thickness after SEC, and the reproducibility of SEC were quantified. Optical coherence tomography characteristics associated with the need for SECs were determined. Reading center teams graded all scans, and the reproducibility of this evaluation for scan quality at the fovea and cystoid macular edema was determined on 97 scans. Segmentation errors were observed in 360 (30.0%) scans, of which 312 were interpretable. On these 312 scans, the mean machine-generated central subfield thickness (CST) was 507.4 ± 208.5 μm, compared to 583.0 ± 266.2 μm after SEC. Segmentation error correction resulted in a mean absolute CST correction of 81.3 ± 162.0 μm from the baseline uncorrected CST. Segmentation error correction was highly reproducible (intraclass correlation coefficient [ICC] = 0.99-1.00). Epiretinal membrane (odds ratio [OR] = 2.3, P < 0.0001), subretinal fluid (OR = 2.1, P = 0.0005), and increasing CST (OR = 1.6 per 100-μm increase, P < 0.001) were associated with the need for SEC. Reading center teams reproducibly graded scan quality at the fovea (87% agreement, kappa = 0.64, 95% confidence interval [CI] 0.45-0.82) and cystoid macular edema (92% agreement, kappa = 0.84, 95% CI 0.74-0.94). Optical coherence tomography images obtained during an interventional CRVO treatment trial can be reproducibly graded. Segmentation errors are common on these images and can cause clinically meaningful deviations in central retinal thickness measurements; however, they can be corrected reproducibly in a reading center setting.
Some unexamined aspects of analysis of covariance in pretest-posttest studies.
Ganju, Jitendra
2004-09-01
The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then it implies that the change from baseline model as a data-generating mechanism is inappropriate. An alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
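A sketch of the algebra behind point (b): writing σ² for the common within-subject error variance and ρ for the pretest-posttest correlation, the change-score contrast has variance proportional to 2σ²(1−ρ) while the ANCOVA contrast has variance proportional to σ²(1−ρ²), giving

```latex
\frac{\mathrm{SE}_{\text{change}}}{\mathrm{SE}_{\mathrm{ANCOVA}}}
= \sqrt{\frac{2\sigma^{2}(1-\rho)}{\sigma^{2}(1-\rho^{2})}}
= \sqrt{\frac{2}{1+\rho}} \;<\; \sqrt{2}
\quad\text{whenever } \rho > 0.
```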
Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.
2015-01-01
Background The calibration to isotope dilution mass spectrometry (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the re-measurement and 5 for outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate GFR and the prevalence of CKD. Results The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and an intercept of −0.0248 (95% CI, −0.0862 to 0.0366) with R² = 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation (P < 0.001). Conclusions A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862
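Deming regression has a closed form; a minimal sketch with the error-variance ratio λ = 1 (i.e., assuming equal imprecision in the two creatinine assays, an assumption not stated in the abstract):

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression: errors-in-both-variables fit with error
    variance ratio lam = Var(err_y) / Var(err_x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) \
            / (2 * sxy)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

# e.g. calibrating an assay against IDMS-traceable re-measurements:
# slope, intercept = deming(original_assay, idms_assay)
```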
The problem of isotopic baseline: Reconstructing the diet and trophic position of fossil animals
NASA Astrophysics Data System (ADS)
Casey, Michelle M.; Post, David M.
2011-05-01
Stable isotope methods are powerful, frequently used tools that allow diet and trophic position reconstruction of organisms and the tracking of energy sources through ecosystems. The majority of ecosystems have multiple food sources with distinct carbon and nitrogen isotopic signatures despite occupying a single trophic level. This difference in the starting isotopic composition of primary producers sets up an isotopic baseline that needs to be accounted for when calculating diet or trophic position using stable isotopic methods. This is particularly important when comparing animals from different regions or different times. Failure to do so can cause erroneous estimations of diet or trophic level, especially for organisms with mixed diets. The isotopic baseline is known to vary seasonally and in concert with a host of physical and chemical variables, such as mean annual rainfall, soil maturity, and soil pH in terrestrial settings, and lake size, depth, and distance from shore in aquatic settings. In the fossil record, the presence of shallowing-upward suites of rock, or parasequences, will have a considerable impact on the isotopic baseline, as basin size, depth and distance from shore change simultaneously with stratigraphic depth. For this reason, each stratigraphic level is likely to need an independent estimation of baseline, even within a single outcrop. Very little is known about the scope of millennial or decadal variation in isotopic baseline. Without multi-year data on the nature of isotopic baseline variation, the impacts of time averaging on our ability to resolve trophic relationships in the fossil record will remain unclear. The use of a time-averaged baseline will increase the amount of error surrounding diet and trophic position reconstructions. Where signal-to-noise ratios are low, due to low end-member disparity (e.g., aquatic systems), or where the observed isotopic shift is small (≤ 1‰), the error introduced by time averaging may severely inhibit the scope of one's interpretations and limit the types of questions one can reliably answer. In situations with strong signal strength, resulting from high end-member disparity (e.g., terrestrial settings), this additional error may be surmountable. Baseline variation that is adequately characterized can be dealt with by applying multiple end-member mixing models.
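The last sentence refers to the now-standard baseline-corrected trophic position formulas (e.g., as popularized by Post); in the two-end-member case:

```latex
\mathrm{TP} = \lambda + \frac{\delta^{15}\mathrm{N}_{\mathrm{consumer}} - \delta^{15}\mathrm{N}_{\mathrm{base}}}{\Delta_{n}},
\qquad
\delta^{15}\mathrm{N}_{\mathrm{base}} = \alpha\,\delta^{15}\mathrm{N}_{\mathrm{base\,1}} + (1-\alpha)\,\delta^{15}\mathrm{N}_{\mathrm{base\,2}},
\qquad
\alpha = \frac{\delta^{13}\mathrm{C}_{\mathrm{consumer}} - \delta^{13}\mathrm{C}_{\mathrm{base\,2}}}{\delta^{13}\mathrm{C}_{\mathrm{base\,1}} - \delta^{13}\mathrm{C}_{\mathrm{base\,2}}}
```

Here λ is the trophic position of the baseline organisms and Δn ≈ 3.4‰ is the assumed per-trophic-level nitrogen enrichment; any error in the baseline terms propagates directly into the reconstructed trophic position.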
NASA Astrophysics Data System (ADS)
Reeves, Jessica A.; Knight, Rosemary; Zebker, Howard A.; Schreüder, Willem A.; Shanker Agram, Piyush; Lauknes, Tom R.
2011-12-01
In the San Luis Valley (SLV), Colorado, legislation passed in 2004 requires that hydraulic head levels in the confined aquifer system stay within the range experienced in the years 1978-2000. While some measurements of hydraulic head exist, greater spatial and temporal sampling would be very valuable in understanding the behavior of the system. Interferometric synthetic aperture radar (InSAR) data provide fine-spatial-resolution measurements of Earth surface deformation, which can be related to hydraulic head change in the confined aquifer system. However, change in cm-scale crop structure with time leads to signal decorrelation, resulting in low-quality data. Here we apply small baseline subset (SBAS) analysis to InSAR data collected from 1992 to 2001. We are able to show high levels of correlation, denoting high-quality data, in areas between the center-pivot irrigation circles, where the lack of water results in little surface vegetation. At three well locations we see a seasonal variation in the InSAR data that mimics the hydraulic head data. We use measured values of the elastic skeletal storage coefficient to estimate hydraulic head from the InSAR data. In general, the magnitudes of the estimated and measured head agree to within the calculated error. However, the errors are unacceptably large, due both to errors in the InSAR data and to uncertainty in the measured value of the elastic skeletal storage coefficient. We conclude that InSAR is capturing the seasonal head variation, but that further research is required to obtain accurate hydraulic head estimates from the InSAR deformation measurements.
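The deformation-to-head conversion rests on the elastic relation between recoverable compaction and head change; a minimal sketch with an assumed storage coefficient (the value below is illustrative, not the SLV estimate):

```python
import numpy as np

def head_change_from_insar(deformation_m, s_ske):
    """Elastic (recoverable) aquifer-system response: surface deformation
    is proportional to head change through the dimensionless elastic
    skeletal storage coefficient, delta_d = S_ske * delta_h, so
    delta_h = delta_d / S_ske."""
    return np.asarray(deformation_m, float) / s_ske

# 5 mm of seasonal uplift with an assumed S_ske of 5e-4 implies ~10 m
# of recoverable head rise
print(head_change_from_insar([0.005], 5e-4))   # -> [10.]
```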
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N
Purpose: In this study, we developed a system to calculate a three-dimensional (3D) dose distribution that reflects the dosimetric error caused by leaf miscalibration for head-and-neck and prostate volumetric modulated arc therapy (VMAT), in real time and without an additional treatment planning system (TPS) calculation. Methods: An original system based on Clarkson dose calculation was developed in MATLAB (MathWorks, Natick, MA) to calculate the dosimetric error caused by leaf miscalibration. Our program first uses the Clarkson method to calculate point doses at the isocenter for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarge the aperture size by 1.0 mm. Second, the error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head-and-neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The dose percentage changes for the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and for generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking dosimetric error caused by leaf miscalibration before pre-treatment patient QA dosimetry checks.
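The transform step, scaling the TPS baseline 3D dose by the ratio of Clarkson point doses, can be sketched as below. This assumes a single global scaling factor derived at the isocenter, which is a simplification; the authors' actual transform may be more elaborate.

```python
import numpy as np

def error_induced_dose(tps_dose_3d, d_iso_baseline, d_iso_modified):
    """Approximate the leaf-miscalibration dose distribution by scaling the
    TPS baseline 3D dose with the ratio of Clarkson point doses at isocenter.
    A simplified sketch of the transform described in the abstract."""
    return tps_dose_3d * (d_iso_modified / d_iso_baseline)

# Toy dose grid in Gy (hypothetical values)
dose = np.random.default_rng(0).uniform(0.5, 2.0, size=(64, 64, 64))
scaled = error_induced_dose(dose, d_iso_baseline=2.00, d_iso_modified=2.06)
print(f"mean dose change: {100 * (scaled.mean() / dose.mean() - 1):.1f}%")  # 3.0%
```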
NASA Astrophysics Data System (ADS)
Hunter, Todd R.; Lucas, Robert; Broguière, Dominique; Fomalont, Ed B.; Dent, William R. F.; Phillips, Neil; Rabanus, David; Vlahakis, Catherine
2016-07-01
In a radio interferometer, the geometrical antenna positions are determined from measurements of the observed delay to each antenna from observations across the sky of many point sources whose positions are known to high accuracy. The determination of accurate antenna positions relies on accurate calibration of the dry and wet delay of the atmosphere above each antenna. For the Atacama Large Millimeter/Submillimeter Array (ALMA), with baseline lengths up to 15 kilometers, the geography of the site forces the height above mean sea level of the more distant antenna pads to be significantly lower than the central array. Thus, both the ground level meteorological values and the total water column can be quite different between antennas in the extended configurations. During 2015, a network of six additional weather stations was installed to monitor pressure, temperature, relative humidity and wind velocity, in order to test whether inclusion of these parameters could improve the repeatability of antenna position determinations in these configurations. We present an analysis of the data obtained during the ALMA Long Baseline Campaign of October through November 2015. The repeatability of antenna position measurements typically degrades as a function of antenna distance. Also, the scatter is more than three times worse in the vertical direction than in the local tangent plane, suggesting that a systematic effect is limiting the measurements. So far we have explored correcting the delay model for deviations from hydrostatic equilibrium in the measured air pressure and separating the partial pressure of water from the total pressure using water vapor radiometer (WVR) data. Correcting for these combined effects still does not provide a good match to the residual position errors in the vertical direction. One hypothesis is that the current model of water vapor may be too simple to fully remove the day-to-day variations in the wet delay. We describe possible new avenues of improvement, which include recalibrating the baseline measurement datasets using the contemporaneous measurements of the water vapor scale height and temperature lapse rate from the oxygen sounder, and applying more accurate measurements of the sky coupling of the WVRs.
Baseline experiments in teleoperator control
NASA Technical Reports Server (NTRS)
Hankins, W. W., III; Mixon, R. W.
1986-01-01
Studies have been conducted at the NASA Langley Research Center (LaRC) to establish baseline human teleoperator interface data and to assess the influence of some of the interface parameters on human performance in teleoperation. As baseline data, the results will be used to assess future interface improvements resulting from this research in basic teleoperator human factors. In addition, the data have been used to validate LaRC's basic teleoperator hardware setup and to compare initial teleoperator study results. Four subjects controlled a modified industrial manipulator to perform a simple task involving both high and low precision. Two different schemes for controlling the manipulator were studied along with both direct and indirect viewing of the task. Performance of the task was measured as the length of time required to complete the task along with the number of errors made in the process. Analyses of variance were computed to determine the significance of the influences of each of the independent variables. Comparisons were also made between the LaRC data and data taken earlier by Grumman Aerospace Corp. at their facilities.
Yarber, William L; Milhausen, Robin R; Beavers, Karly A; Ryan, Rebecca; Sullivan, Margaret J; Vanterpool, Karen B; Sanders, Stephanie A; Graham, Cynthia A; Crosby, Richard A
2018-07-01
To conduct a pilot test of a brief, self-guided, home-based program designed to improve male condom use attitudes and behaviors among young women. Women aged 18-24 years from a large Midwestern University reporting having had penile-vaginal sex with two or more partners in the past 3 months. Sixty-seven enrolled; 91.0% completed the study. A repeated measures design was used, with assessments occurring at baseline, immediately post intervention (T2), and 30 days subsequent (T3). Condom use errors and problems decreased, condom-related attitudes and self-efficacy improved, and experiences of condom-protected sex were rated more positively when comparing baseline with T2 and T3 scores. Further, the proportion of condom-protected episodes more than doubled between T1 and T3 for those in the lowest quartile for condom use at baseline. This low-resource, home-based program improved condom-related attitudes and promoted the correct and consistent use of condoms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rioja, M.; Dodson, R., E-mail: maria.rioja@icrar.org
2011-04-15
We describe a new method which achieves high-precision very long baseline interferometry (VLBI) astrometry in observations at millimeter (mm) wavelengths. It combines fast frequency-switching observations, to correct for the dominant non-dispersive tropospheric fluctuations, with slow source-switching observations, for the remaining dispersive ionospheric terms. We call this method source-frequency phase referencing. Provided that the switching cycles match the properties of the propagation media, one can recover the source astrometry. We present an analytic description of the two-step calibration strategy, along with an error analysis to characterize its performance. Also, we provide observational demonstrations of a successful application with observations using the Very Long Baseline Array at 86 GHz of the source pairs 3C273/3C274 and 1308+326/1308+328 under various conditions. We conclude that this method is widely applicable to mm-VLBI observations of many target sources, and unique in providing bona fide astrometrically registered images and high-precision relative astrometric measurements in mm-VLBI using existing and newly built instruments, including space VLBI.
Multibaseline gravitational wave radiometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Dipongkar; Bose, Sukanta; Mitra, Sanjit
2011-03-15
We present a statistic for the detection of stochastic gravitational wave backgrounds (SGWBs) using radiometry with a network of multiple baselines. We also quantitatively compare the sensitivities of existing baselines and their network to SGWBs. We assess how the measurement accuracy of signal parameters, e.g., the sky position of a localized source, can improve when using a network of baselines, as compared to any of the single participating baselines. The search statistic itself is derived from the likelihood ratio of the cross correlation of the data across all possible baselines in a detector network and is optimal in Gaussian noise. Specifically, it is the likelihood ratio maximized over the strength of the SGWB and is called the maximized-likelihood ratio (MLR). One of the main advantages of using the MLR over past search strategies for inferring the presence or absence of a signal is that the former does not require the deconvolution of the cross correlation statistic. Therefore, it does not suffer from errors inherent to the deconvolution procedure and is especially useful for detecting weak sources. In the limit of a single baseline, it reduces to the detection statistic studied by Ballmer [Classical Quantum Gravity 23, S179 (2006)] and Mitra et al. [Phys. Rev. D 77, 042002 (2008)]. Unlike past studies, here the MLR statistic enables us to compare quantitatively the performances of a variety of baselines searching for a SGWB signal in (simulated) data. Although we use simulated noise and SGWB signals for making these comparisons, our method can be straightforwardly applied on real data.
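In the Gaussian, single-amplitude limit, maximizing the likelihood over the SGWB strength reduces to an inverse-variance-weighted combination of per-baseline cross-correlation measurements. The toy sketch below shows that limit only, not the paper's full radiometric MLR; all numbers are hypothetical.

```python
import numpy as np

def network_amplitude_and_snr(x, sigma):
    """Combine per-baseline cross-correlation measurements x_i with noise
    standard deviations sigma_i. Under Gaussian noise, the likelihood over
    a common signal amplitude A is maximized by the inverse-variance-weighted
    mean; the network SNR follows by quadrature combination."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    a_hat = np.sum(w * x) / np.sum(w)   # maximum-likelihood amplitude estimate
    snr = a_hat * np.sqrt(np.sum(w))    # network signal-to-noise ratio
    return a_hat, snr

# Hypothetical three-baseline network
print(network_amplitude_and_snr([1.2e-46, 0.8e-46, 1.5e-46],
                                [0.5e-46, 0.7e-46, 1.0e-46]))
```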
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Yu, Chengxin; Ding, Xinhua
2018-01-01
In this study, digital photography is used to monitor the instantaneous deformation of a masonry wall under seismic oscillation. To obtain higher measurement accuracy, the image matching-time baseline parallax method (IM-TBPM) is used to correct errors caused by changes in the intrinsic and extrinsic parameters of the digital cameras. Results show that the average errors of control point C5 are 0.79 mm, 0.44 mm, and 0.96 mm in the X, Z, and combined directions, respectively; the average errors of control point C6 are 0.49 mm, 0.44 mm, and 0.71 mm in the X, Z, and combined directions, respectively. These results suggest that IM-TBPM can meet the accuracy requirements of instantaneous deformation monitoring. Under seismic oscillation, cracks first develop in the middle-to-lower portion of the masonry wall; shear failure then occurs in the middle of the wall. This study provides a technical basis for analyzing the crack development pattern of masonry structures under seismic oscillation and has significant implications for improved construction of masonry structures in earthquake-prone areas.
NASA Astrophysics Data System (ADS)
González, Pablo J.; Fernández, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques, based on series of interferograms, have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method, which uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
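The Monte Carlo step, resampling interferograms according to their variance-covariances and re-solving the small-baselines inversion, can be sketched for a single pixel as follows. The design matrix, covariances, and values are illustrative, not from the paper.

```python
import numpy as np

def sb_timeseries_errors(A, ifg, cov, n_mc=500, seed=0):
    """Monte Carlo error propagation for a small-baselines inversion.
    A maps incremental displacements to interferograms (design matrix),
    ifg is the observed interferogram vector for one pixel, and cov is its
    variance-covariance matrix. Returns the weighted least-squares
    displacement estimates and their Monte Carlo standard errors."""
    rng = np.random.default_rng(seed)
    W = np.linalg.inv(cov)
    N = A.T @ W @ A
    est = np.linalg.solve(N, A.T @ W @ ifg)            # weighted LS solution
    sims = rng.multivariate_normal(ifg, cov, size=n_mc)
    mc = np.array([np.linalg.solve(N, A.T @ W @ s) for s in sims])
    return est, mc.std(axis=0)

# Toy network: 3 epochs (2 incremental displacements), 3 interferograms (mm)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ifg = np.array([5.0, 3.0, 8.5])
cov = np.diag([1.0, 1.0, 2.0])
print(sb_timeseries_errors(A, ifg, cov))
```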
Bjølseth, Tor Magne; Engedal, Knut; Benth, Jūratė Šaltytė; Dybedal, Gro Strømnes; Gaarden, Torfinn Lødøen; Tanum, Lars
2015-10-01
No prior study has investigated whether impairment of specific cognitive functions at baseline may predict the short-term treatment outcome of electroconvulsive therapy (ECT) in elderly non-demented patients with major depression (MD). This longitudinal cohort study included 65 elderly patients with unipolar or bipolar MD, aged 60-85 years, treated with formula-based ECT. Treatment outcome was assessed using the 17-item Hamilton Rating Scale for Depression (HRSD17). Cognitive function at baseline was assessed using nine neuropsychological tests or subtests measuring information processing speed, verbal learning and memory, and aspects of executive function. Poorer performance on the word-reading task of the Color Word Interference Test rendered higher odds of achieving remission during the ECT course (p=0.021); remission was defined as an HRSD17 score of 7 or less. There were no other significant associations between the treatment outcome of ECT and cognitive performance parameters assessed at baseline. The limited number of subjects may have reduced the generalizability of the findings, and multiple statistical tests increase the risk of a type I error. How well patients perform on neuropsychological tests at baseline is most likely not a predictor of, or otherwise significantly associated with, the treatment outcome of formula-based ECT in elderly patients with MD.
NASA Astrophysics Data System (ADS)
Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D.
2015-08-01
Context. A long-held vision has been to realize diffraction-limited optical aperture synthesis over kilometer baselines. This will enable imaging of stellar surfaces and their environments, and reveal interacting gas flows in binary systems. An opportunity is now opening up with the large telescope arrays primarily erected for measuring Cherenkov light in air induced by gamma rays. With suitable software, such telescopes could be electronically connected and also used for intensity interferometry. Second-order spatial coherence of light is obtained by cross correlating intensity fluctuations measured in different pairs of telescopes. With no optical links between them, the error budget is set by the electronic time resolution of a few nanoseconds. Corresponding light-travel distances are approximately one meter, making the method practically immune to atmospheric turbulence or optical imperfections, permitting both very long baselines and observing at short optical wavelengths. Aims: Previous theoretical modeling has shown that full images should be possible to retrieve from observations with such telescope arrays. This project aims at verifying diffraction-limited imaging experimentally with groups of detached and independent optical telescopes. Methods: In a large optics laboratory, artificial stars (single and double, round and elliptic) were observed by an array of small telescopes. Using high-speed photon-counting solid-state detectors and real-time electronics, intensity fluctuations were cross-correlated over up to 180 baselines between pairs of telescopes, producing coherence maps across the interferometric Fourier-transform plane. Results: These interferometric measurements were used to extract parameters about the simulated stars, and to reconstruct their two-dimensional images. As far as we are aware, these are the first diffraction-limited images obtained from an optical array only linked by electronic software, with no optical connections between the telescopes. Conclusions: These experiments serve to verify the concepts for long-baseline aperture synthesis in the optical, somewhat analogous to radio interferometry.
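The core measurement, cross-correlating intensity fluctuations between telescope pairs, estimates the second-order coherence g2_ij = <I_i I_j> / (<I_i><I_j>). A toy sketch with simulated, partially correlated photon streams (all values are illustrative):

```python
import numpy as np

def pairwise_g2(intensities):
    """Second-order coherence estimate for each telescope pair:
    g2_ij = <I_i I_j> / (<I_i><I_j>), computed from intensity time series.
    intensities has shape (n_telescopes, n_samples)."""
    I = np.asarray(intensities, float)
    means = I.mean(axis=1)
    g2 = {}
    for i in range(I.shape[0]):
        for j in range(i + 1, I.shape[0]):
            g2[(i, j)] = np.mean(I[i] * I[j]) / (means[i] * means[j])
    return g2

# Toy data: 4 telescopes observing partially correlated intensity fluctuations
rng = np.random.default_rng(1)
common = rng.normal(0, 0.1, 100_000)                  # shared fluctuation
I = 1.0 + 0.5 * common + rng.normal(0, 0.1, (4, 100_000))
print(pairwise_g2(I))  # values slightly above 1 reveal the correlation
```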
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes, and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction, followed by full correction and then over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full-correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles was found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of the myopia progression reported with different types and levels of myopia correction.
Spittal, Matthew J; Carlin, John B; Currier, Dianne; Downes, Marnie; English, Dallas R; Gordon, Ian; Pirkis, Jane; Gurrin, Lyle
2016-10-31
The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. We describe the sampling scheme used in Ten to Men, focusing on four important elements: stratification, multi-stage sampling, clustering, and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted than in weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0% vs. 16.7%, for unweighted and weighted analyses, respectively), and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted and weighted analyses, whether or not adjustment is made for clustering. The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure-outcome association, our advice is to adopt an analysis that respects the sampling design.
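A simplified sketch of the weighted, cluster-aware estimation being contrasted with naive analysis. This is a generic linearization estimator run on hypothetical data, not the study's full stratified multi-stage variance estimator.

```python
import numpy as np

def weighted_mean_and_se(y, w, cluster):
    """Design-weighted mean with a cluster-robust standard error.
    y: outcome, w: sampling weights, cluster: cluster labels.
    Simplified: omits stratification and finite-population corrections."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    mean = np.sum(w * y) / np.sum(w)
    # aggregate weighted residuals within clusters, then combine
    resid = w * (y - mean)
    totals = np.array([resid[cluster == c].sum() for c in np.unique(cluster)])
    se = np.sqrt(np.sum(totals**2)) / np.sum(w)
    return mean, se

rng = np.random.default_rng(2)
y = rng.normal(82, 12, 200)        # weight in kg (hypothetical sample)
w = rng.uniform(0.5, 2.0, 200)     # sampling weights
cl = rng.integers(0, 20, 200)      # 20 clusters
print(weighted_mean_and_se(y, w, cl))
```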
Cognitive performance is associated with gray matter decline in first-episode psychosis.
Dempster, Kara; Norman, Ross; Théberge, Jean; Densmore, Maria; Schaefer, Betsy; Williamson, Peter
2017-06-30
Progressive loss of gray matter has been demonstrated over the early course of schizophrenia. Identification of an association between cognition and gray matter may lead to the development of early interventions directed at preserving gray matter volume and cognitive ability. The present study evaluated the association between gray matter, using voxel-based morphometry (VBM), and cognitive testing in a sample of 16 patients with first-episode psychosis. A simple regression was applied to investigate the association between gray matter at baseline and 80 months and cognitive tests at baseline. Performance on the Wisconsin Card Sorting Task (WCST) at baseline was positively associated with gray matter volume in several brain regions. There was an association between decreased gray matter at baseline in the nucleus accumbens and Trails B errors. Performing worse on Trails B and making more WCST perseverative errors at baseline were associated with gray matter decline over 80 months in the right globus pallidus, left inferior parietal lobe, and Brodmann's area (BA) 40, and in the left superior parietal lobule and BA 7, respectively. All significant findings were cluster corrected. The results support a relationship between aspects of cognitive impairment and gray matter abnormalities in first-episode psychosis.
Online measurement of urea concentration in spent dialysate during hemodialysis.
Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J
2004-01-01
We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
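The processing chain, Savitzky-Golay filtering to suppress baseline variation followed by partial least-squares calibration, can be sketched with synthetic spectra. All shapes, window settings, and signal amplitudes below are illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

# Hypothetical spectra: 200 training spectra x 500 channels, with a slowly
# varying additive baseline and a Gaussian "urea" band whose area tracks
# concentration (mmol/L).
rng = np.random.default_rng(3)
X = rng.normal(0, 0.01, (200, 500)) + np.linspace(0, 1, 500)  # baseline drift
conc = rng.uniform(0, 11, 200)
X += conc[:, None] * np.exp(-0.5 * ((np.arange(500) - 250) / 20) ** 2) * 0.01

# Second-derivative Savitzky-Golay filtering suppresses the additive baseline
Xf = savgol_filter(X, window_length=15, polyorder=3, deriv=2, axis=1)

pls = PLSRegression(n_components=5).fit(Xf, conc)
pred = pls.predict(Xf).ravel()
print(f"calibration RMSE: {np.sqrt(np.mean((pred - conc) ** 2)):.2f} mmol/L")
```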
NASA Astrophysics Data System (ADS)
Gehlot, Bharat K.; Koopmans, Léon V. E.
2018-05-01
Contamination due to foregrounds, calibration errors, and ionospheric effects poses major challenges for detection of the cosmic 21 cm signal in various Epoch of Reionization (EoR) experiments. We present the results of a study of a field centered on 3C196 using LOFAR Low Band observations, in which we quantify various wide-field and calibration effects such as gain errors, polarized foregrounds, and ionospheric effects. We observe a `pitchfork' structure in the power spectrum of the polarized intensity in delay-baseline space, which leaks into the modes beyond the instrumental horizon. We show that this structure arises due to strong instrumental polarization leakage (~30%) towards Cas A, which is far away from the primary field of view. We measure a small ionospheric diffractive scale towards Cas A, resembling pure Kolmogorov turbulence. Our work provides insight into the nature of the aforementioned effects and into mitigating them in future Cosmic Dawn observations.
SITE project. Phase 1: Continuous data bit-error-rate testing
NASA Technical Reports Server (NTRS)
Fujikawa, Gene; Kerczewski, Robert J.
1992-01-01
The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.
Regression dilution in the proportional hazards model.
Hughes, M D
1993-12-01
The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered, whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained, showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship decreases, the required adjustment increases; with no censorship it is markedly higher than 1 + lambda and depends also on the true risk relationship.
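The high-censorship correction is a one-line multiplication. A sketch with hypothetical values (per the abstract, the factor 1 + lambda removes the bias only when censorship is high):

```python
def corrected_log_hazard(beta_naive, within_var, between_var):
    """Adjust a naive proportional-hazards coefficient for regression
    dilution under high censorship: multiply by 1 + lambda, where lambda
    is the ratio of within-person variability to the variability of
    underlying levels in the sampled population."""
    lam = within_var / between_var
    return beta_naive * (1.0 + lam)

# Hypothetical example: naive log hazard ratio 0.30 per unit,
# within-person variance 0.4, population variance 1.0 -> lambda = 0.4
print(corrected_log_hazard(0.30, 0.4, 1.0))  # 0.42
```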
NASA Technical Reports Server (NTRS)
Ong, K. M.; Macdoran, P. F.; Thomas, J. B.; Fliegel, H. F.; Skjerve, L. J.; Spitzmesser, D. J.; Batelaan, P. D.; Paine, S. R.; Newsted, M. G.
1976-01-01
A precision geodetic measurement system (Aries, for Astronomical Radio Interferometric Earth Surveying) based on the technique of very long base line interferometry has been designed and implemented through the use of a 9-m transportable antenna and the NASA 64-m antenna of the Deep Space Communications Complex at Goldstone, California. A series of experiments designed to demonstrate the inherent accuracy of a transportable interferometer was performed on a 307-m base line during the period from December 1973 to June 1974. This short base line was chosen in order to obtain a comparison with a conventional survey with a few-centimeter accuracy and to minimize Aries errors due to transmission media effects, source locations, and earth orientation parameters. The base-line vector derived from a weighted average of the measurements, representing approximately 24 h of data, possessed a formal uncertainty of about 3 cm in all components. This average interferometry base-line vector was in good agreement with the conventional survey vector within the statistical range allowed by the combined uncertainties (3-4 cm) of the two techniques.
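The weighted-average baseline vector and its formal uncertainty follow from standard inverse-variance combination across sessions. A sketch with hypothetical session values on a ~307-m baseline:

```python
import numpy as np

def weighted_baseline(vectors, sigmas):
    """Inverse-variance weighted average of repeated baseline vector
    measurements, with the formal uncertainty of each averaged component.
    vectors, sigmas: arrays of shape (n_sessions, 3), in meters."""
    v, s = np.asarray(vectors, float), np.asarray(sigmas, float)
    w = 1.0 / s**2
    mean = np.sum(w * v, axis=0) / np.sum(w, axis=0)
    formal = 1.0 / np.sqrt(np.sum(w, axis=0))
    return mean, formal

# Hypothetical per-session components (m) and formal errors (m)
v = [[306.91, 10.02, 5.11], [306.95, 10.05, 5.08], [306.93, 10.01, 5.14]]
s = [[0.03, 0.04, 0.05], [0.02, 0.03, 0.06], [0.04, 0.05, 0.04]]
print(weighted_baseline(v, s))
```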
van de Plas, Afke; Slikkerveer, Mariëlle; Hoen, Saskia; Schrijnemakers, Rick; Driessen, Johanna; de Vries, Frank; van den Bemt, Patricia
2017-01-01
In this controlled before-after study, the effect of improvements derived from the Lean Six Sigma strategy on parenteral medication administration errors and the potential risk of harm was determined. During the baseline measurement, on the control versus intervention ward, at least one administration error occurred in 14 (74%) and 6 (46%) administrations, with a potential risk of harm in 6 (32%) and 1 (8%) administrations. Most administration errors with a high potential risk of harm occurred in bolus injections: 8 (57%) versus 2 (67%) bolus injections were injected too fast, with a potential risk of harm in 6 (43%) and 1 (33%) bolus injections on the control and intervention ward. Implemented improvement strategies, based on the major causes of too-fast administration of bolus injections, were: substitution of bolus injections by infusions, education, availability of administration information, and drug round tabards. Post-intervention, at least one error was made in 76 (76%) administrations on the control ward (RR 1.03; 95% CI 0.77-1.38), with a potential risk of harm in 14 (14%) administrations (RR 0.45; 95% CI 0.20-1.02). On the intervention ward, at least one error occurred in 40 (68%) administrations (RR 1.47; 95% CI 0.80-2.71), but no administrations were associated with a potential risk of harm. A shift in wrong-duration administration errors from bolus injections to infusions, with a reduction of the potential risk of harm, seems to have occurred on the intervention ward. Although the data are insufficient to prove an effect, Lean Six Sigma was experienced as a suitable strategy for selecting tailored improvements. Further studies are required to prove the effect of the strategy on parenteral medication administration errors.
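The reported effect sizes are relative risks with normal-approximation confidence intervals on the log scale. A sketch reproducing the control-ward comparison from the counts given in the abstract:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in group 1 (a/n1) vs group 2 (b/n2),
    with a 95% CI via the standard log-RR normal approximation."""
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    se = math.sqrt((1 - p1) / a + (1 - p2) / b)  # SE of log(RR)
    lo, hi = (rr * math.exp(k * 1.96 * se) for k in (-1, 1))
    return rr, lo, hi

# Control ward post-intervention vs baseline: 76/100 vs 14/19 administrations
print(relative_risk(76, 100, 14, 19))  # ~ (1.03, 0.77, 1.38), matching the abstract
```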
Baseline Establishment Using Virtual Environment Traumatic Brain Injury Screen (VETS)
2015-06-01
… indicator of mTBI. Further, these results establish a baseline data set, which may be useful in comparing concussed individuals. Subject terms: concussion; mild traumatic brain injury (mTBI); traumatic brain injury (TBI); balance; Sensory Organization Test; Balance Error Scoring System; center of …
Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.
Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B
2015-08-01
The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered.
Spatio-temporal representativeness of ground-based downward solar radiation measurements
NASA Astrophysics Data System (ADS)
Schwarz, Matthias; Wild, Martin; Folini, Doris
2017-04-01
Surface solar radiation (SSR) is most directly observed with ground-based pyranometer measurements. In addition to measurement uncertainties arising from the pyranometer itself, errors attributable to the limited spatial representativeness of single-site observations for their large-scale surroundings must be taken into account when using such measurements for energy balance studies. In this study, the spatial representativeness of 157 homogeneous European downward surface solar radiation time series from the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN) was examined for the period 1983-2015, using the high-resolution (0.05°) surface solar radiation data set from the Satellite Application Facility on Climate Monitoring (CM-SAF SARAH) as a proxy for the spatiotemporal variability of SSR. By correlating deseasonalized monthly SSR time series from surface observations against single collocated satellite-derived SSR time series, a mean spatial correlation pattern was calculated and validated against purely observation-based patterns. Correlations generally decrease with increasing distance from the station, with high correlations (R2 = 0.7) in proximity to the observational sites (±0.5°). When correlating surface observations against time series from spatially averaged satellite-derived SSR data (thereby simulating coarser and coarser grids), very high correspondence between sites and the collocated pixels was found for pixel sizes up to several degrees. Moreover, special focus was put on quantifying the errors which arise from spatial sampling when estimating the temporal variability and trends of a larger region from a single surface observation site. For 15-year trends on a 1° grid, errors due to spatial sampling on the order of half of the measurement uncertainty for monthly mean values were found.
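The representativeness mapping, correlating one deseasonalized station series against every satellite pixel, can be sketched as follows. The toy anomaly field, array shapes, and values are illustrative; real GEBA/CM-SAF data would be loaded in their place.

```python
import numpy as np

def representativeness_r2(station, satellite_grid):
    """Squared correlation between one deseasonalized station SSR series
    (n_months,) and every pixel of a deseasonalized satellite SSR cube
    (n_months, ny, nx): the map used to judge how far a single site
    represents its surroundings."""
    s = (station - station.mean()) / station.std()
    g = satellite_grid - satellite_grid.mean(axis=0)
    g /= g.std(axis=0)
    r = np.tensordot(s, g, axes=(0, 0)) / len(s)  # per-pixel correlation
    return r**2

rng = np.random.default_rng(4)
months, ny, nx = 120, 21, 21
field = rng.normal(size=(months, ny, nx)).cumsum(axis=0) * 0.05  # toy anomalies
station = field[:, 10, 10] + rng.normal(0, 0.1, months)          # site at center
print(representativeness_r2(station, field)[10, 10])             # near 1 at the site
```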
The Effect of Adenotonsillectomy for Childhood Sleep Apnea on Cardiometabolic Measures
Quante, Mirja; Wang, Rui; Weng, Jia; Rosen, Carol L.; Amin, Raouf; Garetz, Susan L.; Katz, Eliot; Paruthi, Shalini; Arens, Raanan; Muzumdar, Hiren; Marcus, Carole L.; Ellenberg, Susan; Redline, Susan
2015-01-01
Study Objectives: Obstructive sleep apnea syndrome (OSAS) has been associated with cardiometabolic disease in adults. In children, this association is unclear. We evaluated the effect of early adenotonsillectomy (eAT) for treatment of OSAS on blood pressure, heart rate, lipids, glucose, insulin, and C-reactive protein. We also analyzed whether these parameters at baseline and changes at follow-up correlated with polysomnographic indices. Design: Data collected at baseline and 7-mo follow-up were analyzed from a randomized controlled trial, the Childhood Adenotonsillectomy Trial (CHAT). Setting: Clinical referral setting from multiple centers. Participants: There were 464 children, ages 5 to 9.9 y with OSAS without severe hypoxemia. Interventions: Randomization to eAT or Watchful Waiting with Supportive Care (WWSC). Measurements and Results: There was no significant change of cardiometabolic parameters over the 7-mo interval in the eAT group compared to WWSC group. However, overnight heart rate was incrementally higher in association with baseline OSAS severity (average heart rate increase of 3 beats per minute [bpm] for apnea-hypopnea index [AHI] of 2 versus 10; [standard error = 0.60]). Each 5-unit improvement in AHI and 5 mmHg improvement in peak end-tidal CO2 were estimated to reduce heart rate by 1 and 1.5 bpm, respectively. An increase in N3 sleep also was associated with small reductions in systolic blood pressure percentile. Conclusions: There is little variation in standard cardiometabolic parameters in children with obstructive sleep apnea syndrome (OSAS) but without severe hypoxemia at baseline or after intervention. Of all measures, overnight heart rate emerged as the most sensitive parameter of pediatric OSAS severity. Clinical Trial Registration: Clinicaltrials.gov (#NCT00560859) Citation: Quante M, Wang R, Weng J, Rosen CL, Amin R, Garetz SL, Katz E, Paruthi S, Arens R, Muzumdar H, Marcus CL, Ellenberg S, Redline S. The effect of adenotonsillectomy for childhood sleep apnea on cardiometabolic measures. SLEEP 2015;38(9):1395–1403. PMID:25669177
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leen, J. Brian; Berman, Elena S. F.; Gupta, Manish
Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to those absorbed by the water isotopomers, leading to δ²H and δ¹⁸O measurement errors (Δδ²H and Δδ¹⁸O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, m_BB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, m_NB. These metrics are used to correct for Δδ²H and Δδ¹⁸O. The method was tested on 14 instruments, and Δδ¹⁸O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ²H scaled linearly with narrowband and polynomially with broadband absorbers. Additionally, the isotope errors scaled logarithmically with m_NB. Using the isotope error versus m_NB and m_BB curves, Δδ²H and Δδ¹⁸O resulting from methanol contamination were corrected to a maximum mean absolute error of 0.93‰ and 0.25‰, respectively, while Δδ²H and Δδ¹⁸O from ethanol contamination were corrected to a maximum mean absolute error of 1.22‰ and 0.22‰. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. The general technique may also be extended to other laser-based analyzers, including methane and carbon dioxide isotope sensors.
NASA Astrophysics Data System (ADS)
Appleby, Graham; Rodríguez, José; Altamimi, Zuheir
2016-12-01
Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
Serum vitamin D levels are not altered after controlled diesel ...
Past research has suggested that exposure to urban air pollution may be associated with vitamin D deficiency in human populations. Vitamin D is widely known for its importance in bone growth/remodeling, muscle metabolism, and its ability to promote calcium absorption in the gut; deficiency in vitamin D results in the development of rickets in children and osteomalacia in adults. In the current study, we assessed whether vitamin D levels are altered under controlled exposures to a commonly measured urban air pollutant, diesel exhaust. We exposed 12 healthy volunteers to clean air and diesel exhaust (300 μg/m3) for 2 hours while undergoing intermittent exercise. Venous blood was collected before, 0 hrs post-, and 18 hrs post-exposure, and 25-hydroxyvitamin D [25(OH)D] was measured in the serum. The average baseline value of 25(OH)D (mean ± standard error) was 22.9 ± 2.5 ng/mL. Four subjects' baseline values were vitamin D deficient (30 ng/mL). Additionally, there was no significant change in the baseline values between the clean air and diesel exposures (paired t-test, p = 0.54), suggesting minimal variability in 25(OH)D over the experiment's time course. Small inductions in 25(OH)D were found following clean air exposures (12.5 ± 4.9% and 7.1 ± 5.0% for the 0 hrs post- and 18 hrs post-exposure values compared to baseline, respectively). Minimal changes in 25(OH)D were observed following diesel exhaust exposures at 0 hrs post-exposure (3.5 ± 5.2%) and at 18 hrs following exposure.
NASA Technical Reports Server (NTRS)
Shapiro, I. I.; Counselman, C. C., III
1975-01-01
The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained, can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of sun (radar), and time variation of gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.
Goya, Thiago T.; Silva, Rosyvaldo F.; Guerra, Renan S.; Lima, Marta F.; Barbosa, Eline R.F.; Cunha, Paulo Jannuzzi; Lobo, Denise M.L.; Buchpiguel, Carlos A.; Busatto-Filho, Geraldo; Negrão, Carlos E.; Lorenzi-Filho, Geraldo; Ueno-Pardi, Linda M.
2016-01-01
Study Objectives: To investigate muscle sympathetic nerve activity (MSNA) response and executive performance during mental stress in obstructive sleep apnea (OSA). Methods: Individuals with no other comorbidities (age = 52 ± 1 y, body mass index = 29 ± 0.4 kg/m2) were divided into two groups: (1) control (n = 15) and (2) untreated OSA (n = 20), defined by polysomnography. Mini-Mental State Examination (MMSE) and intelligence quotient (IQ) were assessed. Heart rate (HR), blood pressure (BP), and MSNA (microneurography) were measured at baseline and during 3 min of the Stroop Color Word Test (SCWT). Sustained attention and inhibitory control were assessed by the number of correct answers and errors during the SCWT. Results: Control and OSA groups (apnea-hypopnea index, AHI = 8 ± 1 and 47 ± 1 events/h, respectively) were similar in age, MMSE, and IQ. Baseline HR and BP were similar and increased similarly during the SCWT in the control and OSA groups. In contrast, baseline MSNA was higher in OSA compared to controls. Moreover, MSNA significantly increased in the third minute of the SCWT in OSA, but remained unchanged in controls (P < 0.05). The number of correct answers was lower and the number of errors was significantly higher during the second and third minutes of the SCWT in the OSA group (P < 0.05). There was a significant correlation (P < 0.01) between the number of errors in the third minute of the SCWT and AHI (r = 0.59), arousal index (r = 0.55), and minimum O2 saturation (r = −0.57). Conclusions: As compared to controls, MSNA is increased in patients with OSA at rest, and further significant MSNA increments and worse executive performance are seen during mental stress. Clinical Trial Registration: URL: http://www.clinicaltrials.gov, registration number: NCT002289625. Citation: Goya TT, Silva RF, Guerra RS, Lima MF, Barbosa ER, Cunha PJ, Lobo DM, Buchpiguel CA, Busatto-Filho G, Negrão CE, Lorenzi-Filho G, Ueno-Pardi LM. Increased muscle sympathetic nerve activity and impaired executive performance capacity in obstructive sleep apnea. SLEEP 2016;39(1):25–33. PMID:26237773
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
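The prediction-error concept maps onto a one-line temporal-difference update. Below is a minimal Rescorla-Wagner-style sketch of that concept, illustrative rather than any specific model from the review:

```python
def td_update(value, reward, alpha=0.1):
    """One reward-prediction-error step: delta = reward - predicted value.
    A positive delta increases the prediction, zero leaves it at baseline,
    and a negative delta depresses it, mirroring dopamine responses."""
    delta = reward - value
    return value + alpha * delta, delta

v = 0.0
for trial in range(5):
    v, delta = td_update(v, reward=1.0)
    print(f"trial {trial}: prediction error {delta:.3f}, new value {v:.3f}")
```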
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
Tropospheric delay ray tracing applied in VLBI analysis
NASA Astrophysics Data System (ADS)
Eriksson, David; MacMillan, D. S.; Gipson, John M.
2014-12-01
Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI (very long baseline interferometry) analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium-Range Weather Forecasts data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption is not true, we have instead determined the ray trace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA Goddard Space Flight Center Goddard Earth Observing System version 5 numerical weather model. When applied in VLBI analysis, baseline length repeatabilities were improved compared with using the VMF1 mapping function model for 72% of the baselines and site vertical repeatabilities were better for 11 of 13 sites during the 2 week CONT11 observing period in September 2011. When applied to a larger data set (2011-2013), we see a similar improvement in baseline length and also in site position repeatabilities for about two thirds of the stations in each of the site topocentric components.
Yamada, Akira; Mohri, Satoshi; Nakamura, Michihiro; Naruse, Keiji
2015-01-01
The liquid junction potential (LJP), the phenomenon that occurs when two electrolyte solutions of different composition come into contact, prevents accurate measurements in potentiometry. The effect of the LJP is usually remarkable in measurements of diluted solutions with low buffering capacities or low ion concentrations. Our group has constructed a simple method to eliminate the LJP by exerting spatiotemporal control of a liquid junction (LJ) formed between two solutions, a sample solution and a baseline solution (BLS), in a flow-through-type differential pH sensor probe. The method was contrived based on microfluidics. The sensor probe is a differential measurement system composed of two ion-sensitive field-effect transistors (ISFETs) and one Ag/AgCl electrode. With our new method, the border region of the sample solution and BLS is vibrated in order to mix solutions and suppress the overshoot after the sample solution is suctioned into the sensor probe. Compared to the conventional method without vibration, our method shortened the settling time from over two min to 15 s and reduced the measurement error by 86% to within 0.060 pH. This new method will be useful for improving the response characteristics and decreasing the measurement error of many apparatuses that use LJs. PMID:25835300
Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements
NASA Astrophysics Data System (ADS)
Jakub, Thomas D.
Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Several of the shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal to noise environment. These ship tracks were only minutes long compared to the normally 12 to 24 hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurements techniques are not available.
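The ship-drift principle reduces to differencing the observed displacement against a dead-reckoned one computed from heading and speed through water. An illustrative sketch under simplified local flat-earth assumptions (all names hypothetical):

```python
import numpy as np

def drift_current(p0, p1, heading_deg, stw, dt):
    """Estimate the current vector from one AIS ship-track segment.

    p0, p1     : (east, north) positions in metres at start/end
    heading_deg: ship heading over the segment (degrees from north)
    stw        : speed through water (m/s), e.g. from the AIS record
    dt         : segment duration in seconds
    """
    h = np.radians(heading_deg)
    # dead-reckoned displacement from heading and speed through water
    dr = stw * dt * np.array([np.sin(h), np.cos(h)])
    observed = np.asarray(p1) - np.asarray(p0)
    return (observed - dr) / dt   # (east, north) current, m/s

# 10-minute segment: ship headed due north at 5 m/s through the water
print(drift_current((0.0, 0.0), (150.0, 3100.0), 0.0, 5.0, 600.0))
# -> ~ (0.25, 0.17) m/s eastward/northward current
```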
Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment
NASA Technical Reports Server (NTRS)
Dixon, T. H.; Wolf, S. Kornreich
1990-01-01
Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high-precision results with GPS in the CASA region.
Is the deleterious effect of cryotherapy on proprioception mitigated by exercise?
Ribeiro, F; Moreira, S; Neto, J; Oliveira, J
2013-05-01
This study aimed to examine the acute effects of cryotherapy on knee position sense and to determine the time period necessary to normalize joint position sense when exercising after cryotherapy. 12 subjects visited the laboratory twice, once for cryotherapy followed by 30 min of exercise on a cycloergometer and once for cryotherapy followed by 30 min of rest. Sessions were randomly determined and separated by 48 h. Cryotherapy was applied in the form of an ice bag, filled with 1 kg of crushed ice, for 20 min. Knee position sense was measured at baseline, after cryotherapy and every 5 min after cryotherapy removal for a total of 30 min. The main effect of cryotherapy was significant, showing an increase in absolute (F7,154=43.76, p<0.001) and relative (F7,154=7.97, p<0.001) errors after cryotherapy. The intervention after cryotherapy (rest vs. exercise) revealed a significant main effect only for absolute error (F7,154=4.05, p<0.001), i.e., when subjects exercised after cryotherapy, proprioceptive acuity reached baseline values faster (10 min vs. 15 min). Our results indicate that the deleterious effect of cryotherapy on proprioception is mitigated by low-intensity exercise, reducing the time necessary to normalize knee position sense from 15 to 10 min. © Georg Thieme Verlag KG Stuttgart · New York.
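The absolute and relative errors reported above are standard aggregates of position-matching trials; a minimal sketch, assuming repeated target-reproduction trials in degrees:

```python
import numpy as np

def position_sense_errors(target_deg, reproduced_deg):
    """Absolute and relative errors for joint position-matching trials."""
    diff = np.asarray(reproduced_deg) - np.asarray(target_deg)
    absolute_error = np.mean(np.abs(diff))   # magnitude, ignores direction
    relative_error = np.mean(diff)           # signed bias (over/undershoot)
    return absolute_error, relative_error

print(position_sense_errors([45, 45, 45], [47.5, 43.0, 48.0]))
```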
Myopia, contact lens use and self-esteem.
Dias, Lynette; Manny, Ruth E; Weissberg, Erik; Fern, Karen D
2013-09-01
To evaluate whether contact lens (CL) use was associated with self-esteem in myopic children originally enrolled in the Correction of Myopia Evaluation Trial (COMET), which after 5 years continued as an observational study of myopia progression with CL use permitted. Usable data at the 6-year visit, one year after CL use was allowed (n = 423/469, age 12-17 years), included questions on CL use, refractive error measurements and self-reported self-esteem in several areas (scholastic/athletic competence, physical appearance, social acceptance, behavioural conduct and global self-worth). Self-esteem, scored from 1 (low) to 4 (high), was measured by the Self-Perception Profile for Children in participants under 14 years or the Self-Perception Profile for Adolescents in those 14 years and older. Multiple regression analyses were used to evaluate associations between self-esteem and relevant factors identified by univariate analyses (e.g., CL use, gender, ethnicity), while adjusting for baseline self-esteem prior to CL use. Mean (±S.D.) self-esteem scores at the 6-year visit (mean age = 15.3 ± 1.3 years; mean refractive error = -4.6 ± 1.5 D) ranged from 2.74 (± 0.76) on athletic competence to 3.33 (± 0.53) on global self-worth. CL wearers (n = 224) compared to eyeglass wearers (n = 199) were more likely to be female (p < 0.0001). Those who chose to wear CLs had higher social acceptance, athletic competence and behavioural conduct scores (p < 0.05) at baseline compared to eyeglass users. CL users continued to report higher social acceptance scores at the 6-year visit (p = 0.03), after adjusting for baseline scores and other covariates. Ethnicity was also independently associated with social acceptance in the multivariable analyses (p = 0.011); African-Americans had higher scores than Asians, Whites and Hispanics. Age and refractive error were not associated with self-esteem or CL use. COMET participants who chose to wear CLs after 5 years of eyeglass use had higher self-esteem compared to those who remained in glasses, both preceding and following CL use. This suggests that self-esteem may influence the decision to wear CLs and that CLs in turn are associated with higher self-esteem in individuals most likely to wear them. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform
Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia
2017-01-01
To address the limitation of the existing UAV (unmanned aerial vehicles) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms by using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. In this paper, the influence of the positional relationship between the UAVs and the target on localization accuracy has been studied in detail to obtain an ideal measuring position and the optimal localization position where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and obtained the optimal results: σB=1.63×10−4 (°), σL=1.35×10−4 (°), σH=15.8 (m), σsum=27.6 (m), where σB represents the longitude error, σL represents the latitude error, σH represents the altitude error, and σsum represents the error radius. PMID:28067814
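The crossed-angle intersection at the core of this system can be illustrated by the standard least-squares intersection of two skew lines of sight; this is a generic sketch, not the paper's localization model:

```python
import numpy as np

def intersect_los(p1, u1, p2, u2):
    """Least-squares 'intersection' of two skew lines of sight.

    p1, p2: UAV positions; u1, u2: LOS direction vectors. Returns the
    midpoint of the common perpendicular, the usual target estimate
    for two-station crossed-angle localization.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u1 = np.asarray(u1, float) / np.linalg.norm(u1)
    u2 = np.asarray(u2, float) / np.linalg.norm(u2)
    d = p2 - p1
    a, b, c = u1 @ u1, u1 @ u2, u2 @ u2
    denom = a * c - b * b                  # -> 0 when rays are parallel
    s = (c * (u1 @ d) - b * (u2 @ d)) / denom
    t = (b * (u1 @ d) - a * (u2 @ d)) / denom
    return 0.5 * ((p1 + s * u1) + (p2 + t * u2))

# Two UAVs 5 km apart sighting a ground target
p1, p2 = [0.0, 0.0, 1000.0], [5000.0, 0.0, 1000.0]
target = np.array([2500.0, 4000.0, 0.0])
u1, u2 = target - p1, target - p2
print(intersect_los(p1, u1, p2, u2))       # ~ [2500, 4000, 0]
```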
Effect of suspension kinematic on 14 DOF vehicle model
NASA Astrophysics Data System (ADS)
Wongpattananukul, T.; Chantharasenawong, C.
2017-12-01
Computer simulations play a major role in shaping modern science and engineering. They reduce time and resource consumption in new studies and designs. Vehicle simulations have been studied extensively to achieve a vehicle model used in minimum lap time solutions. Simulation accuracy depends on the ability of these models to represent real phenomena. Vehicle models with 7 degrees of freedom (DOF), 10 DOF and 14 DOF are normally used in optimal control to solve for minimum lap time. However, suspension kinematics are always neglected in these models. Suspension kinematics are defined as wheel movements with respect to the vehicle body. Tire forces are expressed as a function of wheel slip and wheel position. Therefore, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effect on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline model. Experimental data were collected from formula-student-style car test runs as baseline data for simulation and for comparison between the baseline model and the model with suspension kinematics. Results show that in a single long turn there is an accumulated trajectory error in the baseline model compared to the model with suspension kinematics, while in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics affect the trajectory simulation of the vehicle; optimal control based on the baseline model will therefore yield an inaccurate control scheme.
Dai, Wujiao; Shi, Qiang; Cai, Changsheng
2017-01-01
The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including sidereal filtering algorithm and multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in observation equation to the adjustment model to improve the precision of BDS static baseline solutions. And the results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744
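Coordinate-domain sidereal filtering, one of the mitigation methods above, subtracts the previous repeat period's multipath residuals from the current series. A minimal sketch; the GPS-like nominal repeat period and the use of linear interpolation are assumptions here:

```python
import numpy as np

def sidereal_filter(t_s, coord_mm, t_prev_s, resid_prev_mm,
                    shift_s=86154.0):
    """Coordinate-domain sidereal filtering sketch.

    Subtracts the previous repeat period's coordinate residuals,
    advanced by shift_s, from today's series. 86154 s is a nominal
    GPS orbit-repeat period; BDS GEO/IGSO/MEO repeat periods differ
    and should be estimated per satellite or constellation.
    """
    correction = np.interp(t_s - shift_s, t_prev_s, resid_prev_mm)
    return coord_mm - correction
```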
Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E
2015-05-01
The calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the remeasurement and 5 as outliers) were divided into three disjoint sets (training, validation, and test) to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of all 5,210 participants to estimate glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and an intercept of -0.0248 (95% CI, -0.0862 to 0.0366), with an R value of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934; 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation (P < 0.001). A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate.
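Applying a Deming calibration of this form is a one-line transformation. The sketch below uses the point estimates quoted in the abstract; the exact functional form used in the study is an assumption:

```python
def calibrate_creatinine(measured_mg_dl,
                         slope=0.968, intercept=-0.0248):
    """Apply a Deming-regression calibration of the form
    IDMS-traceable = intercept + slope * measured.
    Slope and intercept are the JHS point estimates quoted above;
    the linear form is the usual convention for such recalibration.
    """
    return intercept + slope * measured_mg_dl

print(calibrate_creatinine(1.10))   # e.g. 1.10 mg/dL -> ~1.04 mg/dL
```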
Cheng, Sen; Sabes, Philip N
2007-04-01
The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for > or =20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
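The scalar linear dynamical system described above can be simulated in a few lines. The roughly 20% per-trial error correction comes from the abstract; all other parameter values are illustrative:

```python
import numpy as np

def simulate_adaptation(shifts, b=0.2, a=0.99,
                        sigma_state=0.3, sigma_perf=0.5, seed=0):
    """Trial-to-trial adaptation as a scalar linear dynamical system.

    x[t+1] = a * x[t] - b * error[t] + state noise
    error[t] = x[t] + shifts[t] + performance noise
    b ~ 0.2 reflects the >=20% per-trial error correction reported
    above; a and the noise SDs are illustrative values only.
    """
    rng = np.random.default_rng(seed)
    x, errors = 0.0, []
    for s in shifts:
        e = x + s + rng.normal(0.0, sigma_perf)   # observed reach error
        errors.append(e)
        x = a * x - b * e + rng.normal(0.0, sigma_state)
    return np.array(errors)

# random feedback shifts, as in the experiment described above
errs = simulate_adaptation(np.random.default_rng(1).normal(0, 1, 500))
```

The state noise accumulates across trials, so the simulated error sequence shows the temporal correlations the authors use to separate the two noise sources.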
Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis
NASA Technical Reports Server (NTRS)
Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl
2009-01-01
The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
NASA Technical Reports Server (NTRS)
1982-01-01
An effective data collection methodology for evaluating software development methodologies was applied to four different software development projects. Goals of the data collection included characterizing changes and errors, characterizing projects and programmers, identifying effective error detection and correction techniques, and investigating ripple effects. The data collected consisted of changes (including error corrections) made to the software after code was written and baselined, but before testing began. Data collection and validation were concurrent with software development. Changes reported were verified by interviews with programmers.
An Assessment of Spaceborne Near-Nadir Interferometric SAR Performance Over Inland Waters with Real
NASA Astrophysics Data System (ADS)
Tan, H.; Li, S. Y.; Liu, Z. W.
2018-04-01
Elevation measurements of the continental water surface have historically been poorly collected, relying on in situ measurements or occasionally on conventional altimeters with low accuracy. Techniques using InSAR at near-nadir angles to measure inland water elevation with a large swath and high accuracy have been proposed, for instance the WSOA on Jason 2 and the KaRIn on SWOT. However, the WSOA was unfortunately abandoned, and SWOT is planned to be launched in 2021. In this paper, we show real acquisitions from the first spaceborne InSAR of this kind, the Interferometric Imaging Radar Altimeter (InIRA), which has been operating on the Tiangong II spacecraft since 2016. We used the 90-m SRTM DEM as a reference to estimate the phase offset, and then an empirical calibration model was used to correct the baseline errors.
Active learning for noisy oracle via density power divergence.
Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi
2013-10-01
The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework through robust measures based on density power divergence. By minimizing density power divergence, such as β-divergence and γ-divergence, one can estimate the model accurately even under the existence of noisy labels within data. Accordingly, we develop query selecting measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
The MOBID-2 pain scale: Reliability and responsiveness to pain in patients with dementia
Husebo, BS; Ostelo, R; Strand, LI
2014-01-01
Background The Mobilization-Observation-Behavior-Intensity-Dementia-2 (MOBID-2) pain scale is a staff-administered pain tool for patients with dementia. This study explores MOBID-2's test–retest reliability, measurement error and responsiveness to change. Methods Analyses are based upon data from a cluster randomized trial including 352 patients with advanced dementia from 18 Norwegian nursing homes. Test–retest reliability between baseline and week 2 (n = 163), and weeks 2 and 4 (n = 159), was examined in patients not expected to change (controls), using the intraclass correlation coefficient (ICC2.1), standard error of measurement (SEM) and smallest detectable change (SDC). Responsiveness was examined by testing six a priori formulated hypotheses about the association between change scores on MOBID-2 and other outcome measures. Results ICCs of the total MOBID-2 scores were 0.81 (0–2 weeks) and 0.85 (2–4 weeks). SEM and SDC were 1.9 and 3.1 (0–2 weeks) and 1.4 and 2.3 (2–4 weeks), respectively. Five out of six hypotheses were confirmed: MOBID-2 discriminated (p < 0.001) between change in patients with and without a stepwise protocol for treatment of pain (SPTP). Moderate association (r = 0.35) was demonstrated with the Cohen-Mansfield Agitation Inventory, and no association with the Mini-Mental State Examination, Functional Assessment Staging and Activity of Daily Living. Expected associations between change scores of MOBID-2 and the Neuropsychiatric Inventory – Nursing Home version were not confirmed. Conclusion The SEM and SDC in connection with the MOBID-2 pain scale indicate that the instrument is responsive to a decrease in pain after a SPTP. Satisfactory test–retest reliability across test periods was demonstrated. Change scores ≥ 3 on total and subscales are clinically relevant and are beyond measurement error. PMID:24799157
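SEM and SDC follow from the test-retest standard deviation and the ICC under common conventions; a hedged sketch (the study's exact variance decomposition may differ):

```python
import numpy as np

def sem_sdc(scores_t1, scores_t2, icc):
    """SEM and smallest detectable change from test-retest data.

    Uses the common conventions SEM = SD * sqrt(1 - ICC) and
    SDC = 1.96 * sqrt(2) * SEM, with a quick pooled-SD estimate;
    other variance decompositions are also in use.
    """
    pooled = np.concatenate([scores_t1, scores_t2])
    sem = np.std(pooled, ddof=1) * np.sqrt(1.0 - icc)
    sdc = 1.96 * np.sqrt(2.0) * sem
    return sem, sdc

t1 = np.array([3.0, 5.0, 2.0, 6.0])
t2 = np.array([4.0, 5.0, 2.0, 7.0])
print(sem_sdc(t1, t2, icc=0.81))
```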
Alakuijala, Anniina; Maasilta, Paula; Bachour, Adel
2014-01-01
Study Objectives: The Oxford Sleep Resistance Test (OSLER) is a behavioral test that measures a subject's ability to maintain wakefulness and assesses daytime vigilance. The multiple unprepared reaction time (MURT) test measures a subject's reaction time in response to a series of visual or audible stimuli. Methods: We recruited 34 healthy controls in order to determine the normative data for MURT. Then we evaluated modifications in OSLER and MURT values in 192 patients who were referred for suspicion of sleep apnea. We performed OSLER (three 40-min sessions) and MURT (two 10-min sessions) tests at baseline. Of 173 treated OSA patients, 29 professional drivers were retested within six months of treatment. Results: MURT values above 250 ms can be considered abnormal. The OSLER error index (the number of all errors divided by the duration of the session in hours) correlated statistically significantly with sleep latency, MURT time, and ESS. Treatment improved OSLER sleep latency from 33 min 4 s to 36 min 48 s, OSLER error index from 66/h to 26/h, and MURT time from 278 ms to 224 ms; these differences were statistically significant. Conclusions: OSLER and MURT tests are practical and reliable tools for measuring improvement in vigilance due to sleep apnea therapy in professional drivers. Citation: Alakuijala A, Maasilta P, Bachour A. The Oxford Sleep Resistance Test (OSLER) and the multiple unprepared reaction time test (MURT) detect vigilance modifications in sleep apnea patients. J Clin Sleep Med 2014;10(10):1075-1082. PMID:25317088
Clark, N M; Janz, N K; Dodge, J A; Schork, M A; Fingerlin, T E; Wheeler, J R; Liang, J; Keteyian, S J; Santinga, J T
2000-03-01
This study, involving 570 women aged 60 years or older with heart disease, assessed the effects of a disease management program on physical functioning, symptom experience, and psychosocial status. Women were randomly assigned to control or program groups. Six to eight women met weekly with a health educator and peer leader over 4 weeks to learn self-regulation skills, with physical activity as the focus. Evaluative data were collected through telephone interviews, physical assessments, and medical records at baseline and 4 and 12 months post-baseline. At 12 months, compared with controls, program women were less symptomatic (p < .01), scored better on the physical dimension of the Sickness Impact Profile (SIP; p < .05), had improved ambulation as measured by the 6-minute walk (p < .01), and lost more body weight (p < .001). No differences related to psychosocial factors as measured by the SIP were noted. A self-regulation-based program that was provided to older women with heart disease, and that focused on physical activity and disease management problems salient to them, improved their physical functioning and symptom experience. Psychosocial benefit was not evident; this may be a result of measurement error or of insufficient program time spent on psychosocial aspects of functioning.
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers have independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
Outcome of cataract surgery at one year in Kenya, the Philippines and Bangladesh.
Lindfield, R; Kuper, H; Polack, S; Eusebio, C; Mathenge, W; Wadud, Z; Rashid, A M; Foster, A
2009-07-01
To assess the change in vision following cataract surgery in Kenya, Bangladesh and the Philippines and to identify causes and predictors of poor outcome. Cases were identified through surveys, outreach and clinics. They underwent preoperative visual acuity measurement and ophthalmic examination. Cases were re-examined 8-15 months after cataract surgery. Information on age, gender, poverty and literacy was collected at baseline. 452 eyes of 346 people underwent surgery. 124 (27%) eyes had an adverse outcome. In Kenya and the Philippines, the main cause of adverse outcome was refractive error (37% and 49% respectively of all adverse outcomes) then comorbid ocular disease (26% and 27%). In Bangladesh, this was comorbid disease (58%) then surgical complications (21%). There was no significant association between adverse outcome and gender, age, literacy, poverty or preoperative visual acuity. Adverse outcomes following cataract surgery were frequent in the three countries. Main causes were refractive error and preoperative comorbidities. Many patients are not attaining the outcomes available with modern surgery. Focus should be on correcting refractive error, through operative techniques or postoperative refraction, and on a system for assessing comorbidities and communicating risk to patients. These are only achievable with a commitment to ongoing surgical audit.
Normative Values of the Sport Concussion Assessment Tool 3 (SCAT3) in High School Athletes.
Snedden, Traci R; Brooks, Margaret Alison; Hetzel, Scott; McGuine, Tim
2017-09-01
To establish sex-, age-, and concussion history-specific normative baseline Sport Concussion Assessment Tool 3 (SCAT3) values in adolescent athletes. Prospective cohort. Seven Wisconsin high schools. Seven hundred fifty-eight high school athletes participating in 19 sports. Sex, age, and concussion history. Sport Concussion Assessment Tool 3 (SCAT3): total number of symptoms; symptom severity; total Standardized Assessment of Concussion (SAC); and each SAC component (orientation, immediate memory, concentration, delayed recall); Balance Error Scoring System (BESS) total errors (BESS, floor and foam pad). Males reported a higher total number of symptoms [median (interquartile range): 0 (0-2) vs 0 (0-1), P = 0.001] and severity of symptoms [0 (0-3) vs 0 (0-2), P = 0.001] and a lower mean (SD) total SAC [26.0 (2.3) vs 26.4 (2.0), P = 0.026] and orientation score [5 (4-5) vs 5 (5-5), P = 0.021]. There was no difference in baseline scores between sexes for immediate memory, concentration, delayed recall or BESS total errors. No differences were found for any test domain based on age. Previously concussed athletes reported a higher total number of symptoms [1 (0-4) vs 0 (0-2), P = 0.001] and symptom severity [2 (0-5) vs 0 (0-2), P = 0.001]. BESS total scores did not differ by concussion history. This study represents the first published normative baseline SCAT3 values in high school athletes. Results varied by sex and history of previous concussion but not by age. The normative baseline values generated from this study will help clinicians better evaluate and interpret SCAT3 results of concussed adolescent athletes.
Continuous monitoring of surface deformation at Long Valley Caldera, California, with GPS
Dixon, T.H.; Mao, A.; Bursik, M.; Heflin, M.; Langbein, J.; Stein, R.; Webb, F.
1997-01-01
Continuous Global Positioning System (GPS) measurements at Long Valley Caldera, an active volcanic region in east central California, have been made on the south side of the resurgent dome since early 1993. A site on the north side of the dome was added in late 1994. Special adaptations for autonomous operation in remote regions and enhanced vertical precision were made. The data record ongoing volcanic deformation consistent with uplift and expansion of the surface above a shallow magma chamber. Measurement precisions (1 standard error) for "absolute" position coordinates, i.e., relative to a global reference frame, are 3-4 mm (north), 5-6 mm (east), and 10-12 mm (vertical) using 24 hour solutions. Corresponding velocity uncertainties for a 12 month period are about 2 mm/yr in the horizontal components and 3-4 mm/yr in the vertical component. High precision can also be achieved for relative position coordinates on short (<10 km) baselines using broadcast ephemerides and observing times as short as 3 hours, even when data are processed rapidly on site. Comparison of baseline length changes across the resurgent dome between the two GPS sites and corresponding two-color electronic distance measurements indicates similar extension rates within error (±2 mm/yr) once we account for a random walk noise component in both systems that may reflect spurious monument motion. Both data sets suggest a pause in deformation for a 3.5 month period in mid-1995, when the extension rate across the dome decreased essentially to zero. Three dimensional positioning data from the two GPS stations suggest a depth (5.8±1.6 km) and location (west side of the resurgent dome) of a major inflation center, in agreement with other geodetic techniques, near the top of a magma chamber inferred from seismic data. GPS systems similar to those installed at Long Valley can provide a practical method for near real-time monitoring and hazard assessment on many active volcanoes.
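The quoted velocity uncertainties reflect how white and random-walk (monument) noise propagate into rate estimates. A rough sketch using standard closed-form approximations (e.g., Zhang et al., 1997), to be treated as order-of-magnitude only:

```python
import numpy as np

def rate_uncertainty(sigma_wn_mm, rw_mm_per_sqrt_yr, n_days, span_yr):
    """Approximate 1-sigma rate error for a daily position series with
    white noise plus random-walk (monument) noise.

    Standard closed-form approximations:
      white noise : sigma_r^2 ~ 12 * sigma_wn^2 / (N * T^2)
      random walk : sigma_r^2 ~ b^2 / T
    """
    var_wn = 12.0 * sigma_wn_mm ** 2 / (n_days * span_yr ** 2)
    var_rw = rw_mm_per_sqrt_yr ** 2 / span_yr
    return np.sqrt(var_wn + var_rw)   # mm/yr

# e.g. 5 mm white noise, 2 mm/sqrt(yr) random walk, 1 yr of daily data
print(rate_uncertainty(5.0, 2.0, 365, 1.0))   # ~2 mm/yr
```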
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity function over much of M31.
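The mass determination rests on the Einstein timescale relation t_E = R_E / v with R_E^2 = (4GM/c^2) D_l D_ls / D_s. A worked sketch with purely illustrative inputs:

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
KPC = 3.086e19     # m
MSUN = 1.989e30    # kg

def lens_mass(t_e_days, v_perp_kms, d_l_kpc, d_s_kpc):
    """Lens mass implied by an Einstein timescale t_E = R_E / v.

    Inverts R_E^2 = (4GM/c^2) * D_l * D_ls / D_s, so M scales as
    (t_E * v)^2. Inputs below are illustrative, not survey values.
    """
    d_l, d_s = d_l_kpc * KPC, d_s_kpc * KPC
    d_ls = d_s - d_l
    r_e = t_e_days * 86400.0 * v_perp_kms * 1e3   # physical Einstein radius
    m = r_e ** 2 * C ** 2 * d_s / (4.0 * G * d_l * d_ls)
    return m / MSUN

# e.g. 20-day event, 200 km/s, lens at 700 kpc, source in M31 (770 kpc)
print(lens_mass(20.0, 200.0, 700.0, 770.0))
```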
Spatial heterogeneity of type I error for local cluster detection tests
2014-01-01
Background Just as power, type I error of cluster detection tests (CDTs) should be spatially assessed. Indeed, CDTs’ type I error and power have both a spatial component as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets has been performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects, and in particular, two baseline risks. The simulated datasets were analyzed using the Kulldorff’s spatial scan as a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be carefully considered as they rarely occur when there is no cluster. Further work is needed to combine results from power studies with this work in order to optimize CDTs performance. PMID:24885343
Measurement of nutritional status in simulated microgravity by bioelectrical impedance spectroscopy
NASA Technical Reports Server (NTRS)
Bartok, Cynthia; Atkinson, Richard L.; Schoeller, Dale A.
2003-01-01
The potential of bioelectrical impedance spectroscopy (BIS) for assessing nutritional status in spaceflight was tested in two head-down-tilt bed-rest studies. BIS-predicted extracellular water (ECW), intracellular water (ICW), and total body water (TBW) measured using knee-elbow electrode placement were compared with deuterium and bromide dilution (DIL) volumes in healthy, 19- to 45-yr-old subjects. BIS was accurate during 44 h of head-down tilt with mean differences (BIS - DIL) of 0-0.1 kg for ECW, 0.3-0.5 for ICW, and 0.4-0.6 kg for TBW (n = 28). At 44 h, BIS followed the within-individual change in body water compartments with a relative prediction error (standard error of the estimate/baseline volume) of 2.0-3.6% of water space. In the second study, BIS did not detect an acute decrease (-1.41 +/- 0.91 kg) in ICW secondary to 48 h of a protein-free, 800 kcal/day diet (n = 18). BIS's insensitivity to ICW losses may be because they were predominantly (65%) localized to the trunk and/or because there was a general failure of BIS to measure ICW independently of ECW and TBW. BIS may have potential for measuring nutritional status during spaceflight, but its limitations in precision and insensitivity to acute ICW changes warrant further validation studies.
Kousi, Evanthia; O'Flynn, Elizabeth A M; Borri, Marco; Morgan, Veronica A; deSouza, Nandita M; Schmidt, Maria A
2018-05-31
Baseline T2* relaxation time has been proposed as an imaging biomarker in cancer, in addition to Dynamic Contrast-Enhanced (DCE) MRI and diffusion-weighted imaging (DWI) parameters. The purpose of the current work is to investigate sources of error in T2* measurements and the relationship between T2* and DCE and DWI functional parameters in breast cancer. Five female volunteers and thirty-two women with biopsy-proven breast cancer were scanned at 3 T, with Research Ethics Committee approval. T2* values of the normal breast were acquired from high-resolution, low-resolution and fat-suppressed gradient-echo sequences in volunteers, and compared. In breast cancer patients, pre-treatment T2*, DCE MRI and DWI were performed at baseline. Pathologically complete responders at surgery and non-responders were identified and compared. Principal component analysis (PCA) and cluster analysis (CA) were performed. There were no significant differences between T2* values from high-resolution, low-resolution and fat-suppressed datasets (p > 0.05). There were no significant differences between baseline functional parameters in responders and non-responders (p > 0.05). However, there were differences in the relationship between T2* and contrast-agent uptake in responders and non-responders. Voxels of similar characteristics were grouped in 5 clusters, and large intra-tumoural variations of all parameters were demonstrated. Breast T2* measurements at 3 T are robust, but spatial resolution should be carefully considered. T2* of breast tumours at baseline is unrelated to DCE and DWI parameters and contributes towards describing the functional heterogeneity of breast tumours. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
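T2* itself is typically estimated by a monoexponential fit to multi-echo gradient-echo signals. A minimal log-linear sketch (nonlinear fitting is usually preferred at low SNR):

```python
import numpy as np

def fit_t2star(te_ms, signal):
    """Monoexponential T2* estimate from multi-echo gradient-echo data.

    Fits ln(S) = ln(S0) - TE / T2*; a log-linear fit is a common quick
    estimator, though it is biased when noise dominates late echoes.
    """
    te = np.asarray(te_ms, dtype=float)
    y = np.log(np.asarray(signal, dtype=float))
    slope, intercept = np.polyfit(te, y, 1)
    return -1.0 / slope   # T2* in ms

# e.g. echoes at 5..30 ms with a true T2* of 25 ms
te = np.array([5, 10, 15, 20, 25, 30])
print(fit_t2star(te, 1000 * np.exp(-te / 25.0)))   # -> ~25.0
```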
NASA Astrophysics Data System (ADS)
Yamamoto, R.; Hino, R.; Kido, M.; Osada, Y.; Honsho, C.
2017-12-01
Since postseismic deformation following the 2011 Tohoku-oki Earthquake is strongly affected by viscoelastic relaxation, it is difficult to identify postseismic slip from onshore (e.g. GNSS) and offshore (e.g. GPS-Acoustic: GPS-A) observations. To track postseismic slip directly, we installed acoustic ranging instruments across the axis of the central Japan Trench, off Miyagi, near the region of large coseismic motion (>50 m) that occurred during the 2011 Tohoku-oki Earthquake. Direct Path Ranging (DPR) measures the two-way travel time between a pair of transponders settled on the seafloor. Baseline length can be obtained from the travel time and the sound velocity, which is corrected for time-varying temperature and pressure beforehand. We further corrected for the motion of the acoustic elements due to attitude changes of the instruments. Baseline changes can be detected precisely by periodic ranging during the observation. We conducted observations three times (2013, 2014-2015, and 2015-2016) and found that no significant shortening across the trench axis took place. It follows that no shallow postseismic slip occurred off Miyagi, at least from 2013 to 2016. We examined the accuracy of the baseline length measurements and observed errors of 1.0 ppm (1.0 mm for a 1 km baseline), which is small enough. Our results are consistent with the postseismic slip distribution model based on GPS-A observations. Acknowledgements: This research is supported by JSPS KAKENHI (26000002). The installation and recovery of instruments were executed during R/V Kairei (KR13-09; KR15-15), R/V Hakuho-maru (KH-13-05; KH-17-J02), and R/V Shinsei-maru (KS-14-17; KS-15-03; KS-16-14) cruises.
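The DPR baseline computation described above is essentially L = c t / 2 with a temperature- and pressure-corrected sound speed; a trivial sketch with the attitude corrections omitted:

```python
def baseline_length(two_way_time_s, sound_speed_ms):
    """Baseline length from a direct-path acoustic range.

    L = c * t / 2, with c the (temperature/pressure-corrected) sound
    speed along the path. Corrections for instrument attitude, as
    described above, would be applied separately.
    """
    return sound_speed_ms * two_way_time_s / 2.0

# e.g. ~1.33 s two-way time at ~1500 m/s -> ~1 km baseline
print(baseline_length(1.3333, 1500.0))
```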
The Effect of Adenotonsillectomy for Childhood Sleep Apnea on Cardiometabolic Measures.
Quante, Mirja; Wang, Rui; Weng, Jia; Rosen, Carol L; Amin, Raouf; Garetz, Susan L; Katz, Eliot; Paruthi, Shalini; Arens, Raanan; Muzumdar, Hiren; Marcus, Carole L; Ellenberg, Susan; Redline, Susan
2015-09-01
Obstructive sleep apnea syndrome (OSAS) has been associated with cardiometabolic disease in adults. In children, this association is unclear. We evaluated the effect of early adenotonsillectomy (eAT) for treatment of OSAS on blood pressure, heart rate, lipids, glucose, insulin, and C-reactive protein. We also analyzed whether these parameters at baseline and changes at follow-up correlated with polysomnographic indices. Data collected at baseline and 7-mo follow-up were analyzed from a randomized controlled trial, the Childhood Adenotonsillectomy Trial (CHAT). Clinical referral setting from multiple centers. There were 464 children, ages 5 to 9.9 y with OSAS without severe hypoxemia. Randomization to eAT or Watchful Waiting with Supportive Care (WWSC). There was no significant change of cardiometabolic parameters over the 7-mo interval in the eAT group compared to WWSC group. However, overnight heart rate was incrementally higher in association with baseline OSAS severity (average heart rate increase of 3 beats per minute [bpm] for apnea-hypopnea index [AHI] of 2 versus 10; [standard error = 0.60]). Each 5-unit improvement in AHI and 5 mmHg improvement in peak end-tidal CO2 were estimated to reduce heart rate by 1 and 1.5 bpm, respectively. An increase in N3 sleep also was associated with small reductions in systolic blood pressure percentile. There is little variation in standard cardiometabolic parameters in children with obstructive sleep apnea syndrome (OSAS) but without severe hypoxemia at baseline or after intervention. Of all measures, overnight heart rate emerged as the most sensitive parameter of pediatric OSAS severity. Clinicaltrials.gov (#NCT00560859). © 2015 Associated Professional Sleep Societies, LLC.
Arshad, Q; Siddiqui, S; Ramachandran, S; Goga, U; Bonsu, A; Patel, M; Roberts, R E; Nigmatullina, Y; Malhotra, P; Bronstein, A M
2015-12-17
Right hemisphere dominance for visuo-spatial attention is characteristically observed in most right-handed individuals. This dominance has been attributed to both an anatomically larger right fronto-parietal network and the existence of asymmetric parietal interhemispheric connections. Previously it has been demonstrated that interhemispheric conflict, which induces left hemisphere inhibition, results in the modulation of both (i) the excitability of the early visual cortex (V1) and (ii) the brainstem-mediated vestibular-ocular reflex (VOR) via top-down control mechanisms. However to date, it remains unknown whether the degree of an individual's right hemisphere dominance for visuospatial function can influence, (i) the baseline excitability of the visual cortex and (ii) the extent to which the right hemisphere can exert top-down modulation. We directly tested this by correlating line bisection error (or pseudoneglect), taken as a measure of right hemisphere dominance, with both (i) visual cortical excitability measured using phosphene perception elicited via single-pulse occipital trans-cranial magnetic stimulation (TMS) and (ii) the degree of trans-cranial direct current stimulation (tDCS)-mediated VOR suppression, following left hemisphere inhibition. We found that those individuals with greater right hemisphere dominance had a less excitable early visual cortex at baseline and demonstrated a greater degree of vestibular nystagmus suppression following left hemisphere cathodal tDCS. To conclude, our results provide the first demonstration that individual differences in right hemisphere dominance can directly predict both the baseline excitability of low-level brain structures and the degree of top-down modulation exerted over them. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Tao, Qiuxiang; Gao, Tengfei; Liu, Guolin; Wang, Zhiwei
2017-04-01
The external digital elevation model (DEM) error is one of the main factors that affect the accuracy of mine subsidence monitored by two-pass differential interferometric synthetic aperture radar (DInSAR), which has been widely used in monitoring mining-induced subsidence. The theoretical relationship between external DEM error and monitored deformation error is derived based on the principles of interferometric synthetic aperture radar (InSAR) and two-pass DInSAR. Taking the Dongtan and Yangcun mine areas of Jining as test areas, the difference and accuracy of 1:50000, ASTER GDEM V2, and SRTM DEMs are compared and analyzed. Two interferometric pairs of Advanced Land Observing Satellite Phased Array L-band SAR covering the test areas are processed using two-pass DInSAR with the three external DEMs to compare and analyze the effect of the three external DEMs on monitored mine subsidence in high- and low-coherence subsidence regions. Moreover, the reliability and accuracy of the three DInSAR-monitored results are compared and verified against leveling-measured subsidence values. Results show that the effect of an external DEM on mine subsidence monitored by two-pass DInSAR is related not only to radar look angle, perpendicular baseline, slant range, and external DEM error, but also to the ground resolution of the DEM, the magnitude of subsidence, and the coherence of the test areas.
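The derived relationship follows the standard two-pass DInSAR result that an uncompensated DEM error maps into a line-of-sight deformation error scaled by the perpendicular baseline. A sketch of that standard formula (not necessarily the paper's exact expression):

```python
import numpy as np

def dem_induced_deformation_error(dem_error_m, b_perp_m,
                                  slant_range_m, look_angle_deg):
    """Two-pass DInSAR deformation error caused by an external DEM error.

    From the interferometric geometry, an uncompensated height error
    dh maps into a line-of-sight deformation error of roughly
        d_err = B_perp * dh / (R * sin(theta)).
    """
    theta = np.radians(look_angle_deg)
    return b_perp_m * dem_error_m / (slant_range_m * np.sin(theta))

# e.g. 10 m DEM error, 300 m perpendicular baseline, ALOS-like geometry
print(dem_induced_deformation_error(10.0, 300.0, 850e3, 38.7))  # ~5.6 mm
```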
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts to reach the GRACE baseline accuracy predicted by the original design simulations have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlation between range frequency noise and range-rate residuals is also seen.
Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995
NASA Technical Reports Server (NTRS)
Blerman, Gregory S.
1995-01-01
Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
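The fitted first-order Markov model implies exponentially decaying error correlation in time and distance. A sketch using the constants quoted above; the separable product of the two decays is an assumption of this illustration:

```python
import numpy as np

def dgps_error_correlation(dt_s, dd_km,
                           var_m2=3.73, tau_s=3847.1, d0_km=122.8):
    """Correlated DGPS error variance for a first-order Markov model.

    R(dt, dd) = var * exp(-dt/tau) * exp(-dd/d0), using the time
    constant, distance constant, and process variance quoted above;
    the separable product form is an assumption of this sketch.
    """
    return var_m2 * np.exp(-dt_s / tau_s) * np.exp(-dd_km / d0_km)

# error correlation 10 min after, and 50 km from, the reference solution
print(dgps_error_correlation(600.0, 50.0))   # ~2.1 m^2
```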
Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.
Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D
2018-04-12
Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center, according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence and thereby reduce triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated burns of less than 10% total body surface area, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13 overtriaged) following implementation of the web tool (p = 0.017 for combined errors). Overtriage dropped further from baseline to 10/150 after the second test of change (p = 0.005). The total number of triaged patients dropped from 92.3/week to 75.5/week after the second test of change. There was no statistically significant change in the undertriage rate. The combination of web tool implementation and protocol refinement decreased the combined triage error rate by over 50% (from 16.7% to 7.9%). We developed and tested a web tool that improved triage accuracy and provided a sustainable method to enact future quality improvement. This web tool and QI framework would be easily expandable to other hospitals. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Benefits Derived From Laser Ranging Measurements for Orbit Determination of the GPS Satellite Orbit
NASA Technical Reports Server (NTRS)
Welch, Bryan W.
2007-01-01
While navigation systems for the determination of the orbit of the Global Positioning System (GPS) have proven to be very effective, current research is examining methods to lower the error in the GPS satellite ephemerides below its current level. Two GPS satellites currently in orbit carry retro-reflectors onboard. One notion for reducing the error in the satellite ephemerides is to utilize the retro-reflectors via laser ranging measurements taken from multiple Earth ground stations. Analysis has been performed to determine the level of reduction in the semi-major axis covariance of the GPS satellites when laser ranging measurements supplement the radiometric station keeping which the satellites undergo. Six ground tracking systems are studied to estimate the performance of the satellite. The first system is the baseline current-system approach, which provides pseudo-range and integrated Doppler measurements from six ground stations. The remaining five ground tracking systems utilize all measurements from the current system plus laser ranging measurements from the additional ground stations utilized within those systems. Station locations for the additional ground sites were taken from a listing of laser ranging ground stations from the International Laser Ranging Service. Results show reductions in state covariance estimates when utilizing laser ranging measurements to solve for the satellite's position component of the state vector. Results also show dependency on the number of ground stations providing laser ranging measurements, the orientation of the satellite relative to the ground stations, and the initial covariance of the satellite's state vector.
ERIC Educational Resources Information Center
Turner, Jill; Rafferty, Lisa A.; Sullivan, Ray; Blake, Amy
2017-01-01
In this action research case study, the researchers used a multiple baseline across two student pairs design to investigate the effects of the error self-correction method on the spelling accuracy behaviors for four fifth-grade students who were identified as being at risk for learning disabilities. The dependent variable was the participants'…
Development of TPS flight test and operational instrumentation
NASA Technical Reports Server (NTRS)
Carnahan, K. R.; Hartman, G. J.; Neuner, G. J.
1975-01-01
Thermal and flow sensor instrumentation was developed for use as an integral part of the space shuttle orbiter reusable thermal protection system. The effort was performed in three tasks: a study to determine the optimum instruments and instrument installations for the space shuttle orbiter RSI and RCC TPS; tests and/or analysis to determine the instrument installations that minimize measurement errors; and analysis using data from the test program for comparison to analytical methods. A detailed review of existing state-of-the-art instrumentation in industry was performed to establish the baseline from which the research effort departed. From this information, detailed criteria for thermal protection system instrumentation were developed.
Prospects for UT1 Measurements from VLBI Intensive Sessions
NASA Technical Reports Server (NTRS)
Boehm, Johannes; Nilsson, Tobias; Schuh, Harald
2010-01-01
Very Long Baseline Interferometry (VLBI) Intensives are one-hour single-baseline sessions that provide Universal Time (UT1) in near real time, or with a delay of up to three days if a site does not e-transfer the observational data. Because UT1 estimates are important for the prediction of Earth orientation parameters, as well as for any kind of navigation on Earth or in space, there is a need to improve not only the timeliness of the results but also their accuracy. We identify the asymmetry of the tropospheric delays as the major error source, and we provide two strategies to improve the results, in particular for those Intensives which include the station Tsukuba in Japan with its large tropospheric variation. We find an improvement when (1) using ray-traced delays from a numerical weather model, and (2) estimating tropospheric gradients within the analysis of Intensive sessions. The improvement is shown in terms of the reduction in the rms of length-of-day estimates with respect to those derived from Global Positioning System observations.
Alaimo, Katherine; Carlson, Joseph J; Pfeiffer, Karin A; Eisenmann, Joey C; Paek, Hye-Jin; Betz, Heather H; Thompson, Tracy; Wen, Yalu; Norman, Gregory J
2015-08-01
Project FIT was a two-year multi-component nutrition and physical activity intervention delivered in ethnically diverse, low-income elementary schools in Grand Rapids, MI. This paper reports effects on children's nutrition outcomes and process evaluation of the school component. A quasi-experimental design was utilized. 3rd-, 4th- and 5th-grade students (Yr 1 baseline: N = 410; Yr 2 baseline: N = 405; age range: 7.5-12.6 years) were measured in the fall and spring over the two-year intervention. Ordinal logistic and mixed-effects models and generalized estimating equations were fitted, with robust standard errors. Primary outcomes favoring the intervention students were found for consumption of fruits, vegetables and whole grain bread during year 2. Process evaluation revealed that implementation of most intervention components increased during year 2. Project FIT resulted in small but beneficial effects on consumption of fruits, vegetables, and whole grain bread in ethnically diverse, low-income elementary school children.
Loughery, Brian; Knill, Cory; Silverstein, Evan; Zakjevskii, Viatcheslav; Masi, Kathryn; Covington, Elizabeth; Snyder, Karen; Song, Kwang; Snyder, Michael
2018-03-20
We conducted a multi-institutional assessment of a recently developed end-to-end monthly quality assurance (QA) protocol for external beam radiation therapy treatment chains. This protocol validates the entire treatment chain against a baseline to detect the presence of complex errors not easily found in standard component-based QA methods. Participating physicists from 3 institutions ran the end-to-end protocol on treatment chains that include Imaging and Radiation Oncology Core (IROC)-credentialed linacs. Results were analyzed in the format of the American Association of Physicists in Medicine (AAPM) Task Group (TG) 119 report so that they may be referenced by future test participants. Optically stimulated luminescent dosimeter (OSLD), EBT3 radiochromic film, and A1SL ion chamber readings were accumulated across 10 test runs. Confidence limits were calculated to determine where 95% of measurements should fall. From the calculated confidence limits, 95% of measurements should be within 5% error for OSLDs, 4% error for ionization chambers, and 4% error (a 96% relative gamma pass rate) for radiochromic film at 3% dose agreement/3-mm distance to agreement. Data were separated by institution, model of linac, and treatment protocol (intensity-modulated radiation therapy [IMRT] vs volumetric modulated arc therapy [VMAT]). A total of 97% of OSLDs, 98% of ion chambers, and 93% of films were within the confidence limits; measurements fell outside these limits by a maximum of 4%, < 1%, and < 1%, respectively. Data were consistent despite institutional differences in OSLD reading equipment and radiochromic film calibration techniques. Results from this test may be used by clinics for data comparison. Areas of improvement were identified in the end-to-end protocol that can be implemented in an updated version. The consistency of our data demonstrates the reproducibility and ease-of-use of such tests and suggests a potential role for their use in broad end-to-end QA initiatives. Copyright © 2018 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
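The confidence-limit construction follows the TG-119 convention CL = |mean error| + 1.96·SD. A minimal sketch, with made-up OSLD readings standing in for the accumulated data:

```python
# TG-119-style confidence limit: CL = |mean error| + 1.96 * SD.
# The percent-error readings below are hypothetical stand-ins.
import numpy as np

osld_errors_pct = np.array([1.2, -0.8, 2.1, 0.4, -1.5, 1.9, 0.2, -0.6, 1.1, 0.7])
cl = abs(osld_errors_pct.mean()) + 1.96 * osld_errors_pct.std(ddof=1)
print(f"confidence limit: {cl:.1f}%")  # compare against the 5% OSLD limit above
```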
NASA Astrophysics Data System (ADS)
Opshaug, Guttorm Ringstad
There are times and places where conventional navigation systems, such as the Global Positioning System (GPS), are unavailable due to anything from temporary signal occultations to a complete lack of navigation system infrastructure. The goal of the Leapfrog Navigation System (LNS) is to provide localized positioning services for such cases. The concept behind leapfrog navigation is to advance a group of navigation units teamwise into an area of interest. In a practical 2-D case, leapfrogging assumes known initial positions of at least two currently stationary navigation units. Two or more mobile units can then start to advance into the area of interest. The positions of the mobiles are constantly being calculated based on cross-range distance measurements to the stationary units, as well as cross-ranges among the mobiles themselves. At some point the mobile units stop, and the stationary units are released to move. This second team of units (now mobile) can then overtake the first team (now stationary) and travel even further towards the common goal of the group. Since there is always one stationary team, the position of any unit can be referenced back to the initial positions. Thus, LNS provides absolute positioning. I developed the navigation algorithms needed to solve leapfrog positions based on cross-range measurements. I used statistical tools to predict how position errors would grow as a function of navigation unit geometry, cross-range measurement accuracy and previous position errors. Using this knowledge I predicted that a 4-unit Leapfrog Navigation System using 100 m baselines and 200 m leap distances could travel almost 15 km before accumulating absolute position errors of 10 m (1σ). Finally, I built a prototype leapfrog navigation system using 4 GPS transceiver ranging units. I placed the 4 units at the vertices of a 10 m x 10 m square, and leapfrogged the group 20 m forwards, and then back again (40 m total travel). Average horizontal RMS position errors never exceeded 16 cm during these field tests.
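A single leapfrog position fix reduces to a small nonlinear least-squares problem on the cross-ranges. The Gauss-Newton sketch below uses an assumed geometry and noise level, not the thesis's published algorithm:

```python
# Sketch: fix a mobile unit's 2-D position from cross-ranges to two known
# stationary units via Gauss-Newton (geometry and noise are assumed).
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [100.0, 0.0]])   # stationary units, 100 m baseline
truth = np.array([60.0, 180.0])                  # mobile unit (simulation only)
ranges = np.linalg.norm(anchors - truth, axis=1) + rng.normal(0.0, 0.05, 2)

x = np.array([50.0, 150.0])                      # rough initial guess
for _ in range(10):
    d = np.linalg.norm(anchors - x, axis=1)      # predicted ranges
    H = (x - anchors) / d[:, None]               # Jacobian d(range)/d(position)
    x = x + np.linalg.lstsq(H, ranges - d, rcond=None)[0]

print(x)  # converges near `truth`; a mirror solution exists across the baseline
```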
NASA Astrophysics Data System (ADS)
Doin, Marie-Pierre; Lodge, Felicity; Guillaso, Stephane; Jolivet, Romain; Lasserre, Cecile; Ducret, Gabriel; Grandin, Raphael; Pathier, Erwan; Pinel, Virginie
2012-01-01
We assemble a processing chain that handles InSAR computation from raw data to time series analysis. A large part of the chain (from raw data to geocoded unwrapped interferograms) is based on ROI PAC modules (Rosen et al., 2004), with the original routines rearranged and combined with new routines so that all SAR images and interferograms are processed in series in a common radar geometry. A new feature of the software is range-dependent spectral filtering to improve coherence in interferograms with long spatial baselines. Additional components include a module to estimate and remove digital elevation model errors before unwrapping, a module to mitigate the effects of the atmospheric phase delay and remove residual orbit errors, and a module to construct the phase change time series from small baseline interferograms (Berardino et al., 2002). This paper describes the main elements of the processing chain and presents an example application of the software using a data set from the ENVISAT mission covering the Etna volcano.
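The small-baseline time-series step (after Berardino et al., 2002) amounts to a linear least-squares inversion from interferogram phase differences to a per-date phase history. A toy sketch with an assumed pair network:

```python
# Sketch of the small-baseline inversion: each unwrapped interferogram is a
# phase difference between two dates, so least squares recovers the phase
# time series (date 0 fixed to zero). Pairs and phases are hypothetical.
import numpy as np

pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]
phase_ifg = np.array([1.0, 0.8, 1.9, 1.2])   # unwrapped phases, rad

n_dates = 4
A = np.zeros((len(pairs), n_dates - 1))       # date 0 is the reference
for k, (i, j) in enumerate(pairs):
    if j > 0:
        A[k, j - 1] += 1.0
    if i > 0:
        A[k, i - 1] -= 1.0

ts = np.linalg.lstsq(A, phase_ifg, rcond=None)[0]
print(np.concatenate([[0.0], ts]))            # phase history at each date
```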
NASA Astrophysics Data System (ADS)
Orosz, G.; Imai, H.; Dodson, R.; Rioja, M. J.; Frey, S.; Burns, R. A.; Etoka, S.; Nakagawa, A.; Nakanishi, H.; Asaki, Y.; Goldman, S. R.; Tafoya, D.
2017-03-01
We report on the measurement of the trigonometric parallaxes of 1612 MHz hydroxyl masers around two asymptotic giant branch stars, WX Psc and OH 138.0+7.2, using the NRAO Very Long Baseline Array with in-beam phase referencing calibration. We obtain a 3σ upper limit of ≤5.3 mas on the parallax of WX Psc, corresponding to a lower limit distance estimate of ≳190 pc. The obtained parallax of OH 138.0+7.2 is 0.52 ± 0.09 mas (±18%), corresponding to a distance of 1.9 (+0.4/-0.3) kpc, making this the first hydroxyl maser parallax below one milliarcsecond. We also introduce a new method of error analysis for detecting systematic errors in the astrometry. Finally, we compare our trigonometric distances to published phase-lag distances toward these stars and find a good agreement between the two methods.
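As a quick check of the quoted numbers, the parallax-distance relation d [pc] = 1 / p [arcsec] reproduces the 1.9 kpc estimate and its asymmetric errors:

```python
# Check of the parallax-to-distance conversion: d [pc] = 1 / p [arcsec].
p, dp = 0.52e-3, 0.09e-3              # parallax and 1-sigma error, arcsec
d = 1.0 / p                            # ~1923 pc, i.e. ~1.9 kpc
lo, hi = 1.0 / (p + dp), 1.0 / (p - dp)
print(d, d - lo, hi - d)               # ~1923, ~284, ~403 pc (cf. -0.3/+0.4 kpc)
```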
Spacecraft-spacecraft very long baseline interferometry for planetary approach navigation
NASA Technical Reports Server (NTRS)
Edwards, Charles D., Jr.; Folkner, William M.; Border, James S.; Wood, Lincoln J.
1991-01-01
The study presents an error budget for Delta differential one-way range (Delta-DOR) measurements between two spacecraft. Such observations, made between a planetary orbiter (or lander) and another spacecraft approaching that planet, would provide a powerful target-relative angular tracking data type for approach navigation. Accuracies of about 5 nrad should be possible for a pair of X-band spacecraft incorporating 40-MHz DOR tone spacings, while accuracies approaching 1 nrad will be possible if the spacecraft incorporate Ka-band downlinks with DOR tone spacings of order 250 MHz. Operational advantages of this data type are discussed, and ground system requirements needed to enable S/C-S/C Delta-DOR observations are outlined. A covariance analysis is presented to examine the potential navigation improvement for this scenario. The results show factors of 2-3 improvement in spacecraft targeting over conventional Doppler, range, and quasar-relative VLBI, along with reduced sensitivity to ephemeris uncertainty and other systematic errors.
Real-time single-frequency GPS/MEMS-IMU attitude determination of lightweight UAVs.
Eling, Christian; Klingbeil, Lasse; Kuhlmann, Heiner
2015-10-16
In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on the attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and the pitch angle and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases.
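The geometric core of GPS-baseline attitude determination is simple once the baseline vector is known in local coordinates; the sketch below, with an assumed ENU vector, shows how heading and pitch follow from it, leaving aside the ambiguity resolution and Kalman filtering the paper actually addresses:

```python
# Sketch: heading and pitch of a rigid GPS antenna baseline from its local
# East-North-Up vector (the vector is an assumed example value).
import numpy as np

b_enu = np.array([0.87, 0.48, 0.05])   # baseline vector in ENU, metres
heading = np.degrees(np.arctan2(b_enu[0], b_enu[1]))           # clockwise from North
pitch = np.degrees(np.arcsin(b_enu[2] / np.linalg.norm(b_enu)))
print(f"heading {heading:.1f} deg, pitch {pitch:.1f} deg")
```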
Dietary fiber and progression of atherosclerosis: the Los Angeles Atherosclerosis Study.
Wu, Huiyun; Dwyer, Kathleen M; Fan, Zhihong; Shircore, Anne; Fan, Jing; Dwyer, James H
2003-12-01
Several epidemiologic studies found weak protective relations between dietary fiber intake and the risk of cardiovascular disease events. However, few of the studies addressed possible mechanisms of the effect. In the present study, we estimated relations between the progression of atherosclerosis and the intake of selective dietary fiber fractions. Mediation of the relations by serum lipids was also investigated. Participants who were free of heart disease and aged 40-60 y were recruited into the cohort (n = 573; 47% women). The intima-media thickness (IMT) of the common carotid arteries was measured ultrasonographically at the baseline examination and at 2 follow-up examinations (n = 500), dietary intakes were assessed with six 24-h recalls (3 at baseline and 3 at the first follow-up examination), and blood samples were analyzed at baseline and at both follow-up examinations. A significant inverse association was observed between IMT progression and the intakes of viscous fiber (P = 0.05) and pectin (P = 0.01). Correction for measurement error increased the magnitude of these estimated effects. The ratio of total to HDL cholesterol was inversely related to the intakes of total fiber (P = 0.01), viscous fiber (P = 0.05), and pectin (P = 0.01). The magnitude of the association between IMT progression and the intakes of viscous fiber and pectin was attenuated by adjustment for serum lipids. The intake of viscous fiber, especially pectin, appears to protect against IMT progression. Serum lipids may act as a mediator between dietary fiber intake and IMT progression.
Troposphere Delay Raytracing Applied in VLBI Analysis
NASA Astrophysics Data System (ADS)
Eriksson, David; MacMillan, Daniel; Gipson, John
2014-12-01
Tropospheric delay modeling error is one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium-Range Weather Forecasts (ECMWF) data. These mapping functions assume that the tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have instead determined the raytrace delay along the signal path through the three-dimensional troposphere refractivity field for each VLBI quasar observation. We calculated the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results using raytrace delay in the analysis of the CONT11 R&D sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 70% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 2/3 of all stations. The reference frame scale bias error was 0.02 ppb for raytracing versus 0.08 ppb and 0.06 ppb for VMF1 and NMF, respectively.
The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment
NASA Technical Reports Server (NTRS)
Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.
1990-01-01
The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was studied using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, the geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline have substantial geometrical decorrelation for distributed targets. Short-baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified that do not exhibit the geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase/pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower-frequency systems (L- vs. C-band) substantially improve the ability to unwrap the phase correctly by directly reducing both the interferometric phase amplitude and temporal decorrelation.
Weiner, Saul J; Schwartz, Alan; Yudkowsky, Rachel; Schiff, Gordon D; Weaver, Frances M; Goldberg, Julie; Weiss, Kevin B
2007-01-01
Clinical decision making requires 2 distinct cognitive skills: the ability to classify patients' conditions into diagnostic and management categories that permit the application of research evidence, and the ability to individualize, or more specifically to contextualize, care for patients whose circumstances and needs require variation from the standard approach to care. The purpose of this study was to develop and test a methodology for measuring physicians' performance at contextualizing care and compare it to their performance at planning biomedically appropriate care. First, the authors drafted 3 cases, each with 4 variations, 3 of which were embedded with biomedical and/or contextual information that is essential to planning care. Once the cases were validated as instruments for assessing physician performance, 54 internal medicine residents were presented with opportunities to make these preidentified biomedical or contextual errors, and data were collected on information elicitation and error making. The case validation process was successful in that, in the final iteration, the physicians who received the contextual variant of a case proposed an alternate plan of care from those who received the baseline variant 100% of the time. The subsequent piloting of these validated cases unmasked previously unmeasured differences in physician performance at contextualizing care. The findings, which reflect the performance characteristics of the study population, are presented. This pilot study demonstrates a methodology for measuring physician performance at contextualizing care and illustrates the contribution of such information to an overall assessment of physician practice.
Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig
2014-08-01
Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. The effectiveness of BCMA in both reinforcing safe administration practices and reducing medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients in a three-year period. Following BCMA implementation, the number of safe practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), resulting in an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9, p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6, p < .001]). The number of unsafe administration practices (estimate, -5.481; standard error 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.
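The absolute-risk-reduction arithmetic can be reproduced with the standard two-proportion formula; the per-period dose denominators below are assumptions for illustration, since the abstract reports only the overall total:

```python
# Two-proportion absolute risk reduction with a normal-approximation 95% CI.
# Per-period dose counts (n1, n2) are assumed; the abstract gives only totals.
import numpy as np

p1, n1 = 5.9 / 100, 8000   # baseline errors per dose
p2, n2 = 3.0 / 100, 8000   # post-BCMA errors per dose
arr = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(f"ARR = {100*arr:.1f}/100 doses, "
      f"95% CI ({100*(arr - 1.96*se):.1f}, {100*(arr + 1.96*se):.1f})")
# ~ARR = 2.9/100 doses, CI ~(2.3, 3.5): close to the reported (2.2, 3.6)
```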
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
Calibration of the Microwave Limb Sounder on the Upper Atmosphere Research Satellite
NASA Technical Reports Server (NTRS)
Jarnot, R. F.; Cofield, R. E.; Waters, J. W.; Flower, D. A.; Peckham, G. E.
1996-01-01
The Microwave Limb Sounder (MLS) is a three-radiometer, passive, limb emission instrument onboard the Upper Atmosphere Research Satellite (UARS). Radiometric, spectral and field-of-view calibrations of the MLS instrument are described in this paper. In-orbit noise performance, gain stability, spectral baseline and dynamic range are described, as well as use of in-flight data for validation and refinement of prelaunch calibrations. Estimated systematic scaling uncertainties (3 sigma) on calibrated limb radiances from prelaunch calibrations are 2.6% in bands 1 through 3, 3.4% in band 4, and 6% in band 5. The observed systematic errors in band 6 are about 15%, consistent with prelaunch calibration uncertainties. Random uncertainties on individual limb radiance measurements are very close to the levels predicted from measured radiometer noise temperature, with negligible contribution from noise and drifts on the regular in-flight gain calibration measurements.
Recent NA61/SHINE measurements performed for the T2K experiment
NASA Astrophysics Data System (ADS)
2017-12-01
The neutrino programme of the NA61/SHINE experiment at the CERN SPS aims to deliver precise hadron production measurements for improving calculations of the initial neutrino beam flux in long-baseline neutrino oscillation experiments. The first recipient of such measurements is the T2K neutrino oscillation project in Japan. New results on π±, K±, p, K0S and Λ production from the NA61/SHINE 2009 thin-target data analyses, with smaller statistical and systematic errors, are discussed. They enable us to further reduce the flux uncertainties in T2K for neutrino and antineutrino beam modes. We also report on the first corrected π± results obtained for the T2K replica target (a 90 cm long cylinder of 2.6 cm diameter, about 1.9λI). Up to 90% of the neutrino flux can be constrained by such measurements, compared to 60% for the thin-target measurements, which are sensitive only to primary hadron interactions.
Beaton, Kara H.; Wong, Aaron L.; Lowen, Steven B.
2017-01-01
Individual differences in sensorimotor adaptability may permit customized training protocols for optimum learning. Here, we sought to forecast individual adaptive capabilities in the vestibulo-ocular reflex (VOR). Subjects performed 400 head-rotation steps (400 trials) during a baseline test, followed by 20 min of VOR gain adaptation. All subjects exhibited mean baseline VOR gain of approximately 1.0, variable from trial to trial, and showed desired reductions in gain following adaptation with variation in extent across individuals. The extent to which a given subject adapted was inversely proportional to a measure of the strength and duration of baseline inter-trial correlations (β). β is derived from the decay of the autocorrelation of the sequence of VOR gains, and describes how strongly correlated are past gain values; it thus indicates how much the VOR gain on any given trial is informed by performance on previous trials. To maximize the time that images are stabilized on the retina, the VOR should maintain a gain close to 1.0 that is adjusted predominantly according to the most recent error; hence, it is not surprising that individuals who exhibit smaller β (weaker inter-trial correlations) also exhibited the best adaptation. Our finding suggests that the temporal structure of baseline behavioral data contains important information that may aid in forecasting adaptive capacities. This has significant implications for the development of personalized physical therapy protocols for patients, and for other cases when it is necessary to adjust motor programs to maintain movement accuracy in response to pathological and environmental changes. PMID:28380076
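A crude stand-in for the β statistic described above is to look at how slowly the trial-to-trial autocorrelation of the gain sequence decays; the estimator below is illustrative only, not the paper's method:

```python
# Illustrative (not the paper's estimator): summarize inter-trial correlation
# strength from the autocorrelation of a 400-trial VOR gain sequence.
import numpy as np

rng = np.random.default_rng(0)
gains = 1.0 + 0.05 * rng.standard_normal(400)   # stand-in for measured gains

acf = [np.corrcoef(gains[:-k], gains[k:])[0, 1] for k in range(1, 11)]
strength = float(np.sum(acf))   # larger -> past trials inform the current gain more
print(strength)                 # near zero for this uncorrelated stand-in
```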
NASA Astrophysics Data System (ADS)
Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.
2017-05-01
We employ an adaptive measurement system, based on a sequential hypothesis testing (SHT) framework, for detecting material-based threats using data acquired on an experimental X-ray testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquired multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
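The stop/continue logic of a sequential test can be sketched with Wald's classic sequential probability ratio test; the thresholds and the simulated likelihood-ratio stream below are illustrative, not the paper's detector:

```python
# Minimal sequential probability ratio test (SPRT) sketch for an adaptive
# stop/continue decision; error rates and the data stream are illustrative.
import numpy as np

def sprt(log_lr_stream, alpha=0.01, beta=0.01):
    """Accumulate log-likelihood ratios until a Wald threshold is crossed."""
    upper = np.log((1 - beta) / alpha)   # decide 'threat'
    lower = np.log(beta / (1 - alpha))   # decide 'no threat'
    llr = 0.0
    for n, step in enumerate(log_lr_stream, start=1):
        llr += step
        if llr >= upper:
            return "threat", n
        if llr <= lower:
            return "no threat", n
    return "undecided", n

rng = np.random.default_rng(0)
decision, n_views = sprt(rng.normal(0.8, 1.0, size=15))  # threat-like stream
print(decision, n_views)   # typically decides well before all 15 views are used
```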
Impact of Educational Activities in Reducing Pre-Analytical Laboratory Errors
Al-Ghaithi, Hamed; Pathare, Anil; Al-Mamari, Sahimah; Villacrucis, Rodrigo; Fawaz, Naglaa; Alkindi, Salam
2017-01-01
Objectives: Pre-analytic errors during diagnostic laboratory investigations can lead to increased patient morbidity and mortality. This study aimed to ascertain the effect of educational nursing activities on the incidence of pre-analytical errors resulting in non-conforming blood samples. Methods: This study was conducted between January 2008 and December 2015. All specimens received at the Haematology Laboratory of the Sultan Qaboos University Hospital, Muscat, Oman, during this period were prospectively collected and analysed. Similar data from 2007 were collected retrospectively and used as a baseline for comparison. Non-conforming samples were defined as either clotted samples, haemolysed samples, use of the wrong anticoagulant, insufficient quantities of blood collected, incorrect/lack of labelling on a sample or lack of delivery of a sample in spite of a sample request. From 2008 onwards, multiple educational training activities directed at the hospital nursing staff and nursing students primarily responsible for blood collection were implemented on a regular basis. Results: After initiating corrective measures in 2008, a progressive reduction in the percentage of non-conforming samples was observed from 2009 onwards. Despite a 127.84% increase in the total number of specimens received, there was a significant reduction in non-conforming samples from 0.29% in 2007 to 0.07% in 2015, resulting in an improvement of 75.86% (P <0.050). In particular, specimen identification errors decreased by 0.056%, with a 96.55% improvement. Conclusion: Targeted educational activities directed primarily towards hospital nursing staff had a positive impact on the quality of laboratory specimens by significantly reducing pre-analytical errors. PMID:29062553
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis with Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this zero-mean noise dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on the use of CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the result, such that the performance of CLS was better than that of WLS; for wavenumbers with high absorbance, the noise significantly affected the result, and WLS proved better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra as measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In the methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In the methane/toluene mixture gas analysis, a modification of the SWLS is presented to tackle the bias error from other components. The SWLS without modification presented the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
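One way to realize the selection rule is a single weighted solve in which low-absorbance channels keep unit (CLS) weights while high-absorbance channels get inverse-noise (WLS) weights. This is a simplification of the paper's SWLS, with all inputs synthetic:

```python
# Sketch of a selective CLS/WLS solve: channels below the absorbance
# threshold keep unit weights (CLS); the rest are weighted by 1/noise-SD
# (WLS). A simplification of SWLS; all inputs here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
K = rng.random((200, 2))                   # pure-component spectra, 200 channels
c_true = np.array([0.7, 0.3])              # true concentrations
absorbance = K @ c_true
noise_sd = 0.001 + 0.02 * absorbance       # heteroscedastic noise model (assumed)
y = absorbance + rng.normal(0.0, noise_sd)

threshold = 0.4                            # in practice tuned: the OTV
w = np.where(absorbance < threshold, 1.0, 1.0 / noise_sd)
c_hat = np.linalg.lstsq(K * w[:, None], y * w, rcond=None)[0]
print(c_hat)                               # close to c_true
```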
NASA Technical Reports Server (NTRS)
Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert
1994-01-01
Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5-deg latitude and longitude grid. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number of observations lying on grid area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are considered. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation procedure estimates instantaneous ADMs from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.
Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgered, G.; Davis, J.L.; Herring, T.A.
1991-04-10
An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.
Retention of laparoscopic and robotic skills among medical students: a randomized controlled trial.
Orlando, Megan S; Thomaier, Lauren; Abernethy, Melinda G; Chen, Chi Chiung Grace
2017-08-01
Although simulation training beneficially contributes to traditional surgical training, there are limited objective data on the retention of simulation skills. To investigate the retention of laparoscopic and robotic skills after simulation training, we present the second stage of a randomized single-blinded controlled trial in which 40 simulation-naïve medical students were randomly assigned to practice peg transfer tasks on either laparoscopic (N = 20, Fundamentals of Laparoscopic Surgery, Venture Technologies Inc., Waltham, MA) or robotic (N = 20, dV-Trainer, Mimic, Seattle, WA) platforms. In the first stage, two expert surgeons evaluated participants on both tasks before (Stage 1: Baseline) and immediately after training (Stage 1: Post-training) using a modified validated global rating scale of laparoscopic and robotic operative performance. In Stage 2, participants were evaluated on both tasks 11-20 weeks after training. Of the 40 students who participated in Stage 1, 23 (11 laparoscopic and 12 robotic) underwent repeat evaluation. During Stage 2, there were no significant differences between groups in objective or subjective measures for the laparoscopic task. Laparoscopic-trained participants' performances on the laparoscopic task were improved during Stage 2 compared to baseline as measured by time to task completion, but not by the modified global rating scale. During the robotic task, the robotic-trained group demonstrated superior economy of motion (p = .017), tissue handling (p = .020), and fewer errors (p = .018) compared to the laparoscopic-trained group. Robotic-trained participants retained the skills acquired from baseline, with no significant deterioration in modified global rating scale scores during Stage 2. Robotic skills acquired through simulation appear to be better maintained than laparoscopic simulation skills. This study is registered on ClinicalTrials.gov (NCT02370407).
Bédard, Anne-Claude V; Stein, Mark A; Halperin, Jeffrey M; Krone, Beth; Rajwan, Estrella; Newcorn, Jeffrey H
2015-01-01
This study examined the effects of atomoxetine (ATX) and OROS methylphenidate (MPH) on laboratory measures of inhibitory control and attention in youth with attention-deficit/hyperactivity disorder (ADHD). It was hypothesized that performance would be improved by both treatments, but that response profiles would differ because the medications work via different mechanisms. One hundred and two youth (77 male; mean age = 10.5 ± 2.7 years) with ADHD received ATX (1.4 ± 0.5 mg/kg) and MPH (52.4 ± 16.6 mg) in a randomized, double-blind, crossover design. Medication was titrated in 4-6-week blocks separated by a 2-week placebo washout. Inhibitory control and attention measures were obtained at baseline, following washout, and at the end of each treatment using Conners' Continuous Performance Test II (CPT-II), which provided age-adjusted T-scores for reaction time (RT), reaction time variability (RTSD), and errors. Repeated-measures analyses of variance were performed, with Time (premedication, postmedication) and Treatment type (ATX, MPH) entered as within-subject factors. Data from the two treatment blocks were checked for order effects and combined if order effects were not present. Clinicaltrials.gov: NCT00183391. Main effects for Time on RT (p = .03), RTSD (p = .001), and omission errors (p = .01) were significant. A significant Drug × Time interaction indicated that MPH improved RT, RTSD, and omission errors more than ATX (p < .05). Changes in performance with treatment did not correlate with changes in ADHD symptoms. MPH has greater effects than ATX on CPT measures of sustained attention in youth with ADHD. However, the dissociation of cognitive and behavioral change with treatment indicates that CPT measures cannot be considered proxies for symptomatic improvement. Further research on the dissociation of cognitive and behavioral endpoints for ADHD is indicated. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.
Pickles, Andrew; Harris, Victoria; Green, Jonathan; Aldred, Catherine; McConachie, Helen; Slonims, Vicky; Le Couteur, Ann; Hudry, Kristelle; Charman, Tony
2015-02-01
The PACT randomised-controlled trial evaluated a parent-mediated communication-focused treatment for children with autism, intended to reduce symptom severity as measured by a modified Autism Diagnostic Observation Schedule-Generic (ADOS-G) algorithm score. The therapy targeted parental behaviour, with no direct interaction between therapist and child. While nonsignificant group differences were found on ADOS-G score, significant group differences were found for both parent and child intermediate outcomes. This study aimed to better understand the mechanism by which the PACT treatment influenced changes in child behaviour through the targeted parent behaviour. Mediation analysis was used to assess the direct and indirect effects of treatment via parent behaviour on child behaviour and via child behaviour on ADOS-G score. Alternative mediation was explored to study whether the treatment effect acted as hypothesised or via another plausible pathway. Mediation models typically assume no unobserved confounding between mediator and outcome and no measurement error in the mediator. We show how to better exploit the information often available within a trial to begin to address these issues, examining scope for instrumental variable and measurement error models. Estimates of mediation changed substantially when account was taken of the confounder effects of the baseline value of the mediator and of measurement error. Our best estimates that accounted for both suggested that the treatment effect on the ADOS-G score was very substantially mediated by parent synchrony and child initiations. The results highlighted the value of repeated measurement of mediators during trials. The theoretical model underlying the PACT treatment was supported. However, the substantial fall-off in treatment effect highlighted both the need for additional data and for additional target behaviours for therapy. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.
Merchant-Borna, Kian; Jones, Courtney Marie Cora; Janigro, Mattia; Wasserman, Erin B; Clark, Ross A; Bazarian, Jeffrey J
2017-03-01
Recent changes to postconcussion guidelines indicate that postural-stability assessment may augment traditional neurocognitive testing when making return-to-participation decisions. The Balance Error Scoring System (BESS) has been proposed as 1 measure of balance assessment. A new, freely available software program to accompany the Nintendo Wii Balance Board (WBB) system has recently been developed but has not been tested in concussed patients. To evaluate the feasibility of using the WBB to assess postural stability across 3 time points (baseline and postconcussion days 3 and 7) and to assess concurrent and convergent validity of the WBB with other traditional measures (BESS and Immediate Post-Concussion Assessment and Cognitive Test [ImPACT] battery) of assessing concussion recovery. Cohort study. Athletic training room and collegiate sports arena. We collected preseason baseline data from 403 National Collegiate Athletic Association Division I and III student-athletes participating in contact sports and studied 19 participants (age = 19.2 ± 1.2 years, height = 177.7 ± 8.0 cm, mass = 75.3 ± 16.6 kg, time from baseline to day 3 postconcussion = 27.1 ± 36.6 weeks) who sustained concussions. We assessed balance using single-legged and double-legged stances for both the BESS and WBB, focusing on the double-legged, eyes-closed stance for the WBB, and used ImPACT to assess neurocognition at 3 time points. Descriptive statistics were used to characterize the sample. Mean differences and Spearman rank correlation coefficients were used to determine differences within and between metrics over the 3 time points. Individual-level changes over time were also assessed graphically. The WBB demonstrated mean changes between baseline and day 3 postconcussion and between days 3 and 7 postconcussion. It was correlated with the BESS and ImPACT for several measures and identified 2 cases of abnormal balance postconcussion that would not have been identified via the BESS. When accompanied by the appropriate analytic software, the WBB may be an alternative for assessing postural stability in concussed student-athletes and may provide additional information to that obtained via the BESS and ImPACT. However, verification among independent samples is required.
Hodkinson, Duncan J; Krause, Kristina; Khawaja, Nadine; Renton, Tara F; Huggins, John P; Vennart, William; Thacker, Michael A; Mehta, Mitul A; Zelaya, Fernando O; Williams, Steven C R; Howard, Matthew A
2013-01-01
Arterial spin labelling (ASL) is increasingly being applied to study the cerebral response to pain in both experimental human models and patients with persistent pain. Despite its advantages, scanning time and reliability remain important issues in the clinical applicability of ASL. Here we present the test-retest analysis of concurrent pseudo-continuous ASL (pCASL) and visual analogue scale (VAS), in a clinical model of on-going pain following third molar extraction (TME). Using ICC performance measures, we were able to quantify the reliability of the post-surgical pain state and ΔCBF (change in CBF), both at the group and individual case level. Within-subject, the inter- and intra-session reliability of the post-surgical pain state was ranked good-to-excellent (ICC > 0.6) across both pCASL and VAS modalities. The parameter ΔCBF (change in CBF between pre- and post-surgical states) performed reliably (ICC > 0.4), provided that a single baseline condition (or the mean of more than one baseline) was used for subtraction. Between-subjects, the pCASL measurements in the post-surgical pain state and ΔCBF were both characterised as reliable (ICC > 0.4). However, the subjective VAS pain ratings demonstrated a significant contribution of pain state variability, which suggests diminished utility for interindividual comparisons. These analyses indicate that the pCASL imaging technique has considerable potential for the comparison of within- and between-subjects differences associated with pain-induced state changes and baseline differences in regional CBF. They also suggest that differences in baseline perfusion and functional lateralisation characteristics may play an important role in the overall reliability of the estimated changes in CBF. Repeated measures designs have the important advantage that they provide good reliability for comparing condition effects because all sources of variability between subjects are excluded from the experimental error. The ability to elicit reliable neural correlates of on-going pain using quantitative perfusion imaging may help support the conclusions derived from subjective self-report.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression response surface equation (RSE). Data obtained from a major airline's operations of a passenger transport aircraft type into Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
Feeney, Joanne; Savva, George M; O'Regan, Claire; King-Kallimanis, Bellinda; Cronin, Hilary; Kenny, Rose Anne
2016-05-31
Knowing the reliability of cognitive tests, particularly those commonly used in clinical practice, is important in order to interpret the clinical significance of a change in performance or a low score on a single test. To report the intra-class correlation (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Color Trails Test (CTT) among community dwelling older adults. 130 participants aged 55 and older without severe cognitive impairment underwent two cognitive assessments between two and four months apart. Half the group changed rater between assessments and half changed time of day. Mean (standard deviation) MMSE was 28.1 (2.1) at baseline and 28.4 (2.1) at repeat. Mean (SD) MoCA increased from 24.8 (3.6) to 25.2 (3.6). There was a rater effect on CTT, but not on the MMSE or MoCA. The SEM of the MMSE was 1.0, leading to an MDC (based on a 95% confidence interval) of 3 points. The SEM of the MoCA was 1.5, implying an MDC95 of 4 points. MoCA (ICC = 0.81) was more reliable than MMSE (ICC = 0.75), but all tests examined showed substantial within-patient variation. An individual's score would have to change by greater than or equal to 3 points on the MMSE and 4 points on the MoCA for the rater to be confident that the change was not due to measurement error. This has important implications for epidemiologists and clinicians in dementia screening and diagnosis.
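The reported SEM and MDC values follow from the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM, as this quick check using the baseline SDs and ICCs quoted above shows:

```python
# Check of the reliability arithmetic: SEM = SD * sqrt(1 - ICC) and
# MDC95 = 1.96 * sqrt(2) * SEM, using the baseline SDs and ICCs above.
import math

for name, sd, icc in [("MMSE", 2.1, 0.75), ("MoCA", 3.6, 0.81)]:
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    print(f"{name}: SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
# MMSE: SEM = 1.05, MDC95 = 2.91 (~3 points); MoCA: SEM = 1.57, MDC95 = 4.35 (~4)
```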
Bron, Tannetje I; Bijlenga, Denise; Boonstra, A Marije; Breuk, Minda; Pardoen, Willem F H; Beekman, Aartjan T F; Kooij, J J Sandra
2014-04-01
Attention-deficit/hyperactivity disorder (ADHD) is linked to impaired executive functioning (EF). This is the first study to objectively investigate the effects of a long-acting methylphenidate on the neurocognitive test performance of adults with ADHD. Twenty-two adults with ADHD participated in a 6-week study examining the effect of osmotic-release oral system methylphenidate (OROS-mph) on continuous performance tests (CPTs; objective measures) and on the self-reported ADHD rating scale (subjective measure), using a randomized, double-blind, placebo-controlled cross-over design. OROS-mph significantly improved reaction time variability (RTV), commission errors (CE) and d-prime (DP) as compared to baseline (Cohen's d>.50), but did not affect hit reaction time (HRT) or omission errors (OE). Compared to placebo, OROS-mph only significantly influenced RTV on one of two CPTs (p<.050). Linear regression analyses showed that OROS-mph effects were more beneficial in ADHD patients with more severe EF deficits (RTV: β=.670, t=2.097, p=.042; OE: β=-.098, t=-4.759, p<.001) and with more severe ADHD symptoms (RTV: F=6.363, p=.019; HRT: F=3.914, p=.061). Side-effect rates were substantially but non-significantly greater for OROS-mph than for placebo (77% vs. 46%, p=.063). The OROS-mph effects indicated RTV as the most sensitive parameter for measuring both neuropsychological and behavioral deficits in adults with ADHD. These findings suggest RTV as an endophenotypic parameter for ADHD symptomatology, and propose CPTs as an objective method for monitoring methylphenidate titration. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.
Improved Stratospheric Temperature Retrievals for Climate Reanalysis
NASA Technical Reports Server (NTRS)
Rokke, L.; Joiner, J.
1999-01-01
The Data Assimilation Office (DAO) is embarking on plans to generate a twenty-year reanalysis data set of climatic atmospheric variables. One focus will be the evaluation of the dynamics of the stratosphere. The Stratospheric Sounding Unit (SSU), flown as part of the TIROS Operational Vertical Sounder (TOVS), is one of the primary stratospheric temperature sensors flown consistently throughout the reanalysis period. Seven unique sensors made the measurements over time, each with individual instrument characteristics that need to be addressed. The stratospheric temperatures being assimilated across satellite platforms will profoundly impact the reanalysis dynamical fields. To quantify aspects of instrument and retrieval bias, we are carefully collecting and analyzing all available information on the sensors, their instrument anomalies, forward model errors and retrieval biases. For the retrieval of stratospheric temperatures, we adapted the minimum variance approach of Jazwinski (1970) and Rodgers (1976) and applied it to the SSU soundings. In our algorithm, the state vector contains an initial guess of temperature from a model six-hour forecast provided by the Goddard EOS Data Assimilation System (GEOS/DAS). This is combined with an a priori covariance matrix, a forward model parameterization, and specifications of instrument noise characteristics. A quasi-Newtonian iteration is used to obtain convergence of the retrieved state to the measurement vector. This algorithm also enables us to analyze and address the systematic errors associated with the unique characteristics of the cell pressures on the individual SSU instruments and the resolving power of the instruments to vertical gradients in the stratosphere. The preliminary results of the improved retrievals and their assimilation, as well as baseline calculations of bias and rms error between the NESDIS operational product and co-located ground measurements, will be presented.
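For a linear forward model, the minimum-variance update described above has a closed form; the sketch below is a simplification under that assumption, showing how the forecast state, the a priori covariance and the instrument noise combine (the operational algorithm wraps this in a quasi-Newtonian iteration for the nonlinear forward model):

```python
import numpy as np

def min_variance_update(x_b, S_a, y, H, S_e):
    """One minimum-variance (optimal estimation) update: combine a
    background state x_b (e.g., the six-hour forecast) with covariance
    S_a and measurements y with noise covariance S_e, through a
    linearized forward model H."""
    K = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_e)  # gain matrix
    x_hat = x_b + K @ (y - H @ x_b)                     # analysis state
    S_hat = (np.eye(len(x_b)) - K @ H) @ S_a            # analysis covariance
    return x_hat, S_hat
```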
Merchant-Borna, Kian; Jones, Courtney Marie Cora; Janigro, Mattia; Wasserman, Erin B.; Clark, Ross A.; Bazarian, Jeffrey J.
2017-01-01
Context: Recent changes to postconcussion guidelines indicate that postural-stability assessment may augment traditional neurocognitive testing when making return-to-participation decisions. The Balance Error Scoring System (BESS) has been proposed as 1 measure of balance assessment. A new, freely available software program to accompany the Nintendo Wii Balance Board (WBB) system has recently been developed but has not been tested in concussed patients. Objective: To evaluate the feasibility of using the WBB to assess postural stability across 3 time points (baseline and postconcussion days 3 and 7) and to assess concurrent and convergent validity of the WBB with other traditional measures (BESS and Immediate Post-Concussion Assessment and Cognitive Test [ImPACT] battery) of assessing concussion recovery. Design: Cohort study. Setting: Athletic training room and collegiate sports arena. Patients or Other Participants: We collected preseason baseline data from 403 National Collegiate Athletic Association Division I and III student-athletes participating in contact sports and studied 19 participants (age = 19.2 ± 1.2 years, height = 177.7 ± 8.0 cm, mass = 75.3 ± 16.6 kg, time from baseline to day 3 postconcussion = 27.1 ± 36.6 weeks) who sustained concussions. Main Outcome Measure(s): We assessed balance using single-legged and double-legged stances for both the BESS and WBB, focusing on the double-legged, eyes-closed stance for the WBB, and used ImPACT to assess neurocognition at 3 time points. Descriptive statistics were used to characterize the sample. Mean differences and Spearman rank correlation coefficients were used to determine differences within and between metrics over the 3 time points. Individual-level changes over time were also assessed graphically. Results: The WBB demonstrated mean changes between baseline and day 3 postconcussion and between days 3 and 7 postconcussion. It was correlated with the BESS and ImPACT for several measures and identified 2 cases of abnormal balance postconcussion that would not have been identified via the BESS. Conclusions: When accompanied by the appropriate analytic software, the WBB may be an alternative for assessing postural stability in concussed student-athletes and may provide additional information to that obtained via the BESS and ImPACT. However, verification among independent samples is required. PMID:28387551
Sefi-Yurdakul, Nazife; Kaykısız, Hüseyin; Koç, Feray
2018-03-17
To investigate the effects of partial and full correction of refractive errors on sensorial and motor outcomes in children with refractive accommodative esotropia (RAE). The records of pediatric cases with full RAE were reviewed; their first and last sensorial and motor findings were evaluated in two groups, classified as partial (Group 1) and full correction (Group 2) of refractive errors. The mean age at first admission was 5.84 ± 3.62 years in Group 1 (n = 35) and 6.35 ± 3.26 years in Group 2 (n = 46) (p = 0.335). Mean change in best corrected visual acuity (BCVA) was 0.24 ± 0.17 logarithm of the minimum angle of resolution (logMAR) in Group 1 and 0.13 ± 0.16 logMAR in Group 2 (p = 0.001). Duration of deviation, baseline refraction and amount of reduced refraction showed significant effects on change in BCVA (p < 0.05). Significant correlation was determined between binocular vision (BOV), duration of deviation and uncorrected baseline amount of deviation (p < 0.05). The baseline BOV rates were significantly high in fully corrected Group 2, and also were found to have increased in Group 1 (p < 0.05). Change in refraction was - 0.09 ± 1.08 and + 0.35 ± 0.76 diopters in Groups 1 and 2, respectively (p = 0.005). Duration of deviation, baseline refraction and the amount of reduced refraction had significant effects on change in refraction (p < 0.05). Change in deviation without refractive correction was - 0.74 ± 7.22 prism diopters in Group 1 and - 3.24 ± 10.41 prism diopters in Group 2 (p = 0.472). Duration of follow-up and uncorrected baseline deviation showed significant effects on change in deviation (p < 0.05). Although the BOV rates and BCVA were initially high in fully corrected patients, they finally improved significantly in both the fully and partially corrected patients. Full hypermetropic correction may also cause an increase in the refractive error with a possible negative effect on emmetropization. The negative effect of the duration of deviation on BOV and BCVA demonstrates the significance of early treatment in RAE cases.
Lee, Yueh-Chang; Wang, Jen-Hung; Chiu, Cheng-Jen
2017-12-08
Several studies have reported the efficacy of orthokeratology for myopia control; however, few publications report follow-up longer than 3 years. This study examines whether overnight orthokeratology influences the progression rate of the manifest refractive error of myopic children over a longer follow-up period (up to 12 years) and, where changes in progression rate are found, investigates the relationship between refractive changes and baseline factors, including refractive error, wearing age and lens replacement frequency. In addition, this study collects a long-term safety profile of overnight orthokeratology. This is a retrospective study of sixty-six school-age children who received overnight orthokeratology correction between January 1998 and December 2013. Thirty-six subjects whose baseline age and refractive error matched those of the orthokeratology group were selected to form a control group. These subjects were followed up for at least 12 months. Manifest refractions, cycloplegic refractions, uncorrected and best-corrected visual acuities, power vector of astigmatism, corneal curvature, and lens replacement frequency were obtained for analysis. Data for 203 eyes were derived from the 66 orthokeratology subjects (31 males and 35 females) and 36 control subjects (22 males and 14 females) enrolled in this study. Wearing ages ranged from 7 to 16 years (mean ± SE, 11.72 ± 0.18 years). The follow-up time ranged from 1 to 13 years (mean ± SE, 6.32 ± 0.15 years). At baseline, myopia ranged from -0.5 D to -8.0 D (mean ± SE, -3.70 ± 0.12 D), and astigmatism ranged from 0 D to -3.0 D (mean ± SE, -0.55 ± 0.05 D). Compared with the control group, the orthokeratology group had a significantly (p < 0.001) lower trend of refractive error change during the follow-up period. In the generalized estimating equation (GEE) model, greater baseline astigmatism was associated with a larger change in refractive error during the follow-up years. Overnight orthokeratology was effective in slowing myopia progression over a twelve-year follow-up period and demonstrated a clinically acceptable safety profile. Higher initial astigmatism was associated with a greater change of refractive error during the follow-up years.
Longterm follow-up in European respiratory health studies – patterns and implications
2014-01-01
Background Selection bias is a systematic error in epidemiologic studies that may seriously distort true measures of associations between exposure and disease. Observational studies are highly susceptible to selection bias, and researchers should therefore always examine to what extent selection bias may be present in their material and what characterizes it. In the present study we examined long-term participation and the consequences of loss to follow-up in the studies Respiratory Health in Northern Europe (RHINE), the Italian centers of the European Community Respiratory Health Survey (I-ECRHS), and the Italian Study on Asthma in Young Adults (ISAYA). Methods Logistic regression identified predictors of follow-up participation. Baseline prevalence of 9 respiratory symptoms (asthma attack, asthma medication, a combined variable with asthma attack and/or asthma medication, wheeze, rhinitis, wheeze with dyspnea, wheeze without cold, waking with chest tightness, waking with dyspnea) and 9 exposure-outcome associations (predictors: sex, age and smoking; outcomes: wheeze, asthma and rhinitis) were compared between all baseline participants and long-term participants. Bias was measured as ratios of relative frequencies and ratios of odds ratios (ROR). Results Follow-up response rates after 10 years were 75% in RHINE, 64% in I-ECRHS and 53% in ISAYA. After 20 years of follow-up, response was 53% in RHINE and 49% in I-ECRHS. Female sex predicted long-term participation (in RHINE OR (95% CI) 1.30 (1.22, 1.38); in I-ECRHS 1.29 (1.11, 1.50); and in ISAYA 1.42 (1.25, 1.61)), as did increasing age. Baseline prevalence of respiratory symptoms was lower among long-term participants (relative deviations compared to the total baseline population 0-15% (RHINE), 0-48% (I-ECRHS), 3-20% (ISAYA)), except rhinitis, which had a slightly higher prevalence. Most exposure-outcome associations did not differ between long-term participants and all baseline participants, except for a lower OR for rhinitis among ISAYA long-term participating smokers (relative deviation 17% (smokers) and 44% (10-20 pack years)). Conclusions We found comparable patterns of long-term participation and loss to follow-up in RHINE, I-ECRHS and ISAYA. Baseline prevalence estimates for long-term participants were slightly lower than for the total baseline population, while exposure-outcome associations were mainly unchanged by loss to follow-up. PMID:24739530
Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting
NASA Astrophysics Data System (ADS)
Abarr, Miles L. Lindsey
This work introduces a new concept for a utility-scale combined energy storage and generation system. The proposed design couples a pumped thermal energy storage (PTES) system with the waste heat leaving a natural gas peaker plant, creating a low-cost utility-scale energy storage system by leveraging this dual functionality. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in MathWorks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model is provided. The experimental results showed that the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a finite element analysis (FEA) model showed <1% error for bottoming-mode heat transfer. The system model was used to conduct sensitivity, baseline performance, and levelized cost of energy analyses of a recently proposed pumped thermal energy storage and bottoming system (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual functionality of the Bot-PTES enables a higher capacity factor, leading to a levelized cost of energy of $91-197/MWh, compared to $262-284/MWh for batteries and $172-254/MWh for compressed air energy storage.
Hu, Yin; Niu, Yong; Wang, Dandan; Wang, Ying; Holden, Brien A; He, Mingguang
2015-01-22
Structural changes of the retinal vasculature, such as altered retinal vascular calibers, are considered early signs of systemic vascular damage. We examined the associations of the 5-year mean level, longitudinal trend, and fluctuation in fasting plasma glucose (FPG) with retinal vascular caliber in people without established diabetes. A prospective study was conducted in a cohort of Chinese people aged ≥40 years in Guangzhou, southern China. FPG was measured at baseline in 2008 and annually until 2012. In 2012, retinal vascular caliber was assessed using standard fundus photographs and validated software. A total of 3645 baseline nondiabetic participants with baseline and follow-up FPG data for 3 or more visits were included in the statistical analysis. The associations of retinal vascular caliber with 5-year mean FPG level, longitudinal FPG trend (slope of a linear regression on FPG), and fluctuation (standard deviation and root mean square error of FPG) were analyzed using multivariable linear regression analyses. Multivariate regression models adjusted for baseline FPG and other potential confounders showed that a 10% annual increase in FPG was independently associated with a 2.65-μm narrowing of the retinal arterioles (P = 0.008) and a 3.47-μm widening of the venules (P = 0.004). Associations with mean FPG level and fluctuation were not statistically significant. An annual rising trend in FPG, but not its mean level or fluctuation, is associated with altered retinal vasculature in nondiabetic people. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J
2015-01-02
Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with electron capture detector (μECD), further confirmed qualitatively by GC×GC with electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influence the reproducibility of analyte signal, error of the calibration offset, proportionality of integrated signal response, and accuracy of quantifications. Additionally, the choice of baseline correction and the peak delineation algorithm are essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Assessment of local GNSS baselines at co-location sites
NASA Astrophysics Data System (ADS)
Herrera Pinzón, Iván; Rothacher, Markus
2018-01-01
As one of the major contributors to the realisation of the International Terrestrial Reference System (ITRS), the Global Navigation Satellite Systems (GNSS) are prone to suffer from irregularities and discontinuities in time series. While often associated with hardware/software changes and the influence of the local environment, these discrepancies constitute a major threat to ITRS realisations. Co-located GNSS at fundamental sites, with two or more available instruments, provide the opportunity to mitigate their influence while improving the accuracy of estimated positions, by examining data breaks, local biases, deformations, time-dependent variations and the comparison of GNSS baselines with existing local tie measurements. Using co-located GNSS data from a subset of sites of the International GNSS Service network, this paper discusses a global multi-year analysis with the aim of delivering homogeneous time series of coordinates to analyse system-specific error sources in the local baselines. Results based on the comparison of different GNSS-based solutions with the local survey ties show discrepancies of up to 10 mm, despite GNSS coordinate repeatabilities at the sub-mm level. The discrepancies are especially large for the solutions using the ionosphere-free linear combination and estimating tropospheric zenith delays, which corresponds to the processing strategy used for global solutions. Snow on the antennas causes further problems and seasonal variations of the station coordinates. These findings demonstrate the need for permanent high-quality monitoring of the effects present in the short GNSS baselines at fundamental sites.
Shear flow control of cold and heated rectangular jets by mechanical tabs. Volume 2: Tabulated data
NASA Technical Reports Server (NTRS)
Brown, W. H.; Ahuja, K. K.
1989-01-01
The effects of mechanical protrusions on the jet mixing characteristics of rectangular nozzles for heated and unheated subsonic and supersonic jet plumes were studied. The characteristics of a rectangular nozzle of aspect ratio 4 without the mechanical protrusions were first investigated. Intrusive probes were used to make the flow measurements. Possible errors introduced by intrusive probes in making shear flow measurements were also examined. Several scaled sizes of mechanical tabs were then tested, configured around the perimeter of the rectangular jet. Both the number and the location of the tabs were varied. From this, the best configuration was selected. This volume contains tabulated data for each of the data runs cited in Volume 1. Baseline characteristics, mixing modifications (subsonic and supersonic, heated and unheated) and miscellaneous charts are included.
Hall, Deborah A; Mehta, Rajnikant L; Fackrell, Kathryn
2017-09-18
Loudness is a major auditory dimension of tinnitus and is used to diagnose severity, counsel patients, or as a measure of clinical efficacy in audiological research. There is no standard test for tinnitus loudness, but matching and rating methods are popular. This article provides important new knowledge about the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. Retrospective analysis of loudness data for 91 participants with stable subjective tinnitus enrolled in a randomized controlled trial of a novel drug for tinnitus. There were two baseline assessments (screening, Day 1) and a posttreatment assessment (Day 28). About 66%-70% of the variability from screening to Day 1 was attributable to the true score. But measurement error, indicated by the smallest detectable change, was high for both tinnitus loudness matching (20 dB) and tinnitus loudness rating (3.5 units). Only loudness rating captured a sensation that was meaningful to people who lived with the experience of tinnitus. The tinnitus loudness rating performed better against acceptability criteria for reliability and validity than did the tinnitus loudness matching test administered by an audiologist. But the rating question is still limited because it is a single-item instrument and is probably able to detect only large changes (at least 3.5 points).
Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.
Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping
2013-06-21
In the field of Global Navigation Satellite System (GNSS) attitude determination, constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints, such as the baseline length, the geometry of multiple baselines and the horizontal attitude angles, have been used extensively to improve the performance of ambiguity resolution. In GNSS/Inertial Navigation System (INS) integrated attitude determination systems using a low-grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem rather than by the IMU sensors independently. However, when a rotation occurs, the angle through which the vehicle has turned within a short time span can be measured accurately by the IMU. This measurement can be treated as a constraint, namely the rate-gyro-integral constraint, which can aid GNSS ambiguity resolution. We use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly when this constraint is imposed during the rotation, thus helping to speed up the initialization of attitude parameters under dynamic circumstances. This paper studies the application of this new constraint to land vehicles only. The impacts of measurement errors on the effectiveness of the new constraint are assessed for different grades of IMU and the current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determination.
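A minimal sketch of how such a candidate filter might look, under simplifying assumptions that are not from the paper (a single baseline, and a rotation well approximated by the angle between the two candidate baseline vectors): each integer-ambiguity candidate implies a baseline vector at each of two epochs, and a candidate survives only if the implied rotation agrees with the gyro-integrated turn angle.

```python
import numpy as np

def passes_gyro_integral_check(b_t1, b_t2, gyro_angle_deg, tol_deg=1.0):
    """Keep an ambiguity candidate only if the rotation implied by its
    baseline vectors at two epochs (b_t1, b_t2, in metres) matches the
    IMU-integrated turn angle within a tolerance."""
    cosang = np.dot(b_t1, b_t2) / (np.linalg.norm(b_t1) * np.linalg.norm(b_t2))
    implied_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return abs(implied_deg - gyro_angle_deg) <= tol_deg

# Candidates whose implied rotation disagrees with the gyro are discarded,
# shrinking the ambiguity search space during the turn.
```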
Adequacy of selected evapotranspiration approximations for hydrologic simulation
Sumner, D.M.
2006-01-01
Evapotranspiration (ET) approximations, usually based on computed potential ET (PET) and diverse PET-to-ET conceptualizations, are routinely used in hydrologic analyses. This study presents an approach to incorporate measured (actual) ET data, increasingly available using micrometeorological methods, to define the adequacy of ET approximations for hydrologic simulation. The approach is demonstrated at a site where eddy correlation-measured ET values were available. A baseline hydrologic model incorporating measured ET values was used to evaluate the sensitivity of simulated water levels, subsurface recharge, and surface runoff to error in four ET approximations. An annually invariant pattern of mean monthly vegetation coefficients was shown to be most effective, despite the substantial year-to-year variation in measured vegetation coefficients. The temporal variability of available water (precipitation minus ET) at the humid, subtropical site was largely controlled by the relatively high temporal variability of precipitation, benefiting the effectiveness of coarse ET approximations, a result that is likely to prevail at other humid sites.
NASA Astrophysics Data System (ADS)
Ducret, Gabriel; Doin, Marie-Pierre; Lasserre, Cécile; Guillaso, Stéphane; Twardzik, Cedric
2010-05-01
In order to increase our knowledge of the rheological structure of the lithosphere under the Tibetan plateau, we study the loading response to water level changes of lake Siling Co. The challenge is to measure the deformation with an accuracy good enough to obtain a correct sensitivity to model parameters. The InSAR method in theory allows observation of the spatio-temporal pattern of deformation; however, its exploitation is limited by unwrapping difficulties linked to temporal decorrelation and to DEM errors in sloping and partially incoherent areas. Siling Co is a large endorheic lake at 4500 m elevation, located north of the right-lateral strike-slip Gyaring Co fault and just south of the Bangong-Nujiang suture zone, onto which numerous left-lateral strike-slip faults branch. The lake water level has changed strongly in the past, as testified by numerous traces of palaeo-shorelines, clearly marked up to 60 m above the present-day level. In recent years, the water level of the lake increased by about 1 m/yr, a remarkably fast rate given the large lake surface (1600 km2). The present-day ground subsidence associated with the water level increase is studied by InSAR using all ERS and Envisat data archived on track 219, obtained through the Dragon cooperation program. We compute 750-km-long differential interferograms centered on the lake to provide a good constraint on the reference. A redundant network of small-baseline interferograms is computed, with perpendicular baselines smaller than 500 m. Coherence is quickly lost with time (over one year), particularly to the north of the lake, because of freeze-thaw cycles. Unwrapping thus becomes hazardous in this configuration and fails on phase jumps created by DEM contrasts. The first step is to improve the simulated elevation field in radar geometry from the digital elevation model (here SRTM) in order to exploit the interferometric phase in layover areas. Then, to estimate the DEM error, we combine the Permanent Scatterer and Small Baseline methods, with the aim of improving spatial and temporal coherence. We use as references strong, stable-amplitude points or spatially coherent areas scattered within the SAR scene, and we calculate the relative elevation error of every point in the neighbourhood of the reference points. A global inversion then performs the spatial integration of the local errors at the scale of the radar image. Finally, we evaluate how the DEM correction of wrapped interferograms improves the unwrapping step. To further help unwrapping, we also compute, and then remove from the wrapped interferograms, the residual orbital trend and the phase-elevation relationship due to variations in atmospheric stratification. Stacks of unwrapped small-baseline interferograms clearly show an average subsidence rate around the lake of about 4 mm/yr, associated with the present-day water level increase. To compare the observed deformation with the water level changes, we extract the water level history from satellite images over the period 1972 to 2009. The deformation signal is discussed in terms of end-member visco-elastic models of the lithosphere and uppermost mantle.
Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping
2011-04-01
In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weight selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with an RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
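BCC-PLS embeds the baseline constraint in the PLS weight selection itself; that step is not reproduced here, but the simpler two-step procedure it aims to improve on, correct the baseline first and then regress, is easy to sketch with hypothetical parameters:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def detrend_then_pls(X, y, degree=2, n_components=5):
    """Two-step alternative to BCC-PLS: fit and subtract a low-order
    polynomial baseline from every spectrum (rows of X), then fit an
    ordinary PLS model on the corrected spectra."""
    idx = np.arange(X.shape[1])
    coeffs = np.polynomial.polynomial.polyfit(idx, X.T, degree)
    baselines = np.polynomial.polynomial.polyval(idx, coeffs)
    model = PLSRegression(n_components=n_components)
    model.fit(X - baselines, y)
    return model
```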
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George
2010-01-01
The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.
Tropospheric Delay Raytracing Applied in VLBI Analysis
NASA Astrophysics Data System (ADS)
MacMillan, D. S.; Eriksson, D.; Gipson, J. M.
2013-12-01
Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from ECMWF data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have determined the raytrace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results from analysis of the CONT11 R&D and the weekly operational R1+R4 experiment sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 66-72% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 65% of sites.
An Ensemble Method for Spelling Correction in Consumer Health Questions
Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina
2015-01-01
Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29 achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant to spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
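The edit-distance-plus-frequency component of such an ensemble is essentially the classic noisy-channel corrector; a minimal sketch (the contextual-similarity component and the paper's actual combination scheme are omitted):

```python
from collections import Counter

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, freq: Counter):
    """Return the most frequent in-vocabulary candidate one edit away."""
    if word in freq:
        return word
    candidates = [w for w in edits1(word) if w in freq]
    return max(candidates, key=freq.get) if candidates else word
```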
NASA Astrophysics Data System (ADS)
Juanola-Parramon, Roser; Zimmerman, Neil; Bolcar, Matthew R.; Rizzo, Maxime; Roberge, Aki
2018-01-01
The Coronagraph is a key instrument on the Large UV-Optical-Infrared (LUVOIR) Surveyor mission concept. The Apodized Pupil Lyot Coronagraph (APLC) is one of the baselined mask technologies to enable 1E10 contrast observations in the habitable zones of nearby stars. Both LUVOIR architectures A and B present a segmented aperture as the input pupil, introducing a set of random tip/tilt and piston errors, among others, that greatly affect the performance of the coronagraph instrument by increasing the wavefront errors and hence reducing the instrument sensitivity. In this poster we present the latest results of the simulation of these effects for different working-angle regions and discuss the achieved contrast for exoplanet detection and characterization, including simulated observations under these circumstances, setting boundaries for the tolerance of such errors.
Gated integrator with signal baseline subtraction
Wang, Xucheng
1996-01-01
An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window.
Real-Time Single-Frequency GPS/MEMS-IMU Attitude Determination of Lightweight UAVs
Eling, Christian; Klingbeil, Lasse; Kuhlmann, Heiner
2015-01-01
In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and pitch angles and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases. PMID:26501281
Schonberger, Robert B; Gilbertsen, Todd; Dai, Feng
2013-01-01
Objective(s) Observational database research frequently relies on imperfect administrative markers to determine comorbid status, and it is difficult to infer to what extent the associated misclassification impacts validity in multivariable analyses. The effect that imperfect markers of disease will have on validity in situations where researchers attempt to match populations that have strong baseline health differences is underemphasized as a limitation in some otherwise high-quality observational studies. The present simulations were designed as a quantitative demonstration of the importance of this common and underappreciated issue. Design Two groups of Monte Carlo simulations were performed. The first demonstrated the degree to which controlling for a series of imperfect markers of disease between different populations taking 2 hypothetically harmless drugs would lead to spurious associations between drug assignment and mortality. The second Monte Carlo simulation applied this principle to a recent study in the field of anesthesiology that purported to show increased perioperative mortality in patients taking metoprolol versus atenolol. Setting/Participants/Interventions None. Measurements and Main Results Simulation 1: High type 1 error (ie, false positive findings of an independent association between drug assignment and mortality) was observed as sensitivity and specificity declined and as systematic differences in disease prevalence increased. Simulation 2: Propensity score matching across several imperfect markers was unlikely to eliminate important baseline health disparities in the referenced study. Conclusions In situations where large baseline health disparities exist between populations, matching on imperfect markers of disease may result in strong bias away from the null hypothesis. PMID:23962461
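A compact illustration of the first simulation's point, with invented numbers rather than the study's settings: two cohorts take equally harmless drugs, but drug A users are sicker, and adjusting for an imperfect disease marker (70% sensitivity, 90% specificity) via a Mantel-Haenszel stratified odds ratio still leaves a spurious mortality association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(prevalence, sens=0.7, spec=0.9):
    """One cohort: true disease raises mortality, the drug is harmless,
    and the recorded marker misclassifies disease per sens/spec."""
    disease = rng.random(n) < prevalence
    marker = np.where(disease, rng.random(n) < sens, rng.random(n) > spec)
    death = rng.random(n) < np.where(disease, 0.10, 0.02)
    return marker, death

mA, dA = simulate(0.30)   # drug A users: 30% disease prevalence
mB, dB = simulate(0.10)   # drug B users: 10% disease prevalence

# Mantel-Haenszel odds ratio for death (A vs B), stratified by marker
num = den = 0.0
for s in (False, True):
    a, b = dA[mA == s].sum(), (~dA[mA == s]).sum()  # drug A deaths, survivors
    c, d = dB[mB == s].sum(), (~dB[mB == s]).sum()  # drug B deaths, survivors
    t = a + b + c + d
    num += a * d / t
    den += b * c / t
print(f"marker-adjusted OR = {num / den:.2f}")  # stays well above 1.0
```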
Viallon, Magalie; Terraz, Sylvain; Roland, Joerg; Dumont, Erik; Becker, Christoph D; Salomir, Rares
2010-04-01
MR thermometry based on the proton resonance frequency shift (PRFS) is the most commonly used method for the monitoring of thermal therapies. As the chemical shift of water protons is temperature dependent, the local temperature variation (relative to an initial baseline) may be calculated from time-dependent phase changes in gradient-echo (GRE) MR images. Dynamic phase shift in GRE images is also produced by time-dependent changes in the magnetic bulk susceptibility of tissue. Gas bubbles (known as "white cavitation") are frequently visualized near the RF electrode in ultrasonography-guided radio frequency ablation (RFA). This study aimed to investigate the effects of RFA-induced cavitation by using simultaneous ultrasonography and MRI, both to visualize the cavitation and to quantify the subsequent magnetic susceptibility-mediated errors in concurrent PRFS MR thermometry (MRT), as well as to propose a first-order correction for these errors. RF heating in saline gels and in ex vivo tissues was performed with MR-compatible bipolar and monopolar electrodes inside a 1.5 T clinical MR scanner. Ultrasonography simultaneous with PRFS MRT was achieved using an MR-compatible phased-array ultrasonic transducer. PRFS MRT was performed interleaved in three orthogonal planes and compared to measurements from fluoroptic sensors, under low and high RFA power levels, respectively. Control experiments were performed to isolate the main source of errors in standard PRFS thermometry. Ultrasonography, MRI and digital camera pictures clearly demonstrated the generation of bubbles whenever the radio frequency equipment was operated at therapeutic powers (≥30 W). Simultaneous bimodal (ultrasonography and MRI) monitoring of high-power RF heating demonstrated a correlation between the onset of the PRFS-thermometry errors and the appearance of bubbles around the applicator. In an ex vivo study using a bipolar RF electrode at a low power level (5 W), the MR-measured temperature curves accurately matched the reference fluoroptic data. In similar ex vivo studies applying higher RFA power levels (30 W), the correlation plots of MR thermometry versus fluoroptic data showed large errors in PRFS-derived temperature (up to 45 °C absolute deviation, positive or negative), depending not only on the fluoroptic tip position but also on the RF electrode orientation relative to the B0 axis. Regions with an apparent decrease in the PRFS-derived temperature maps of as much as 30 °C below the initial baseline were visualized during high-power RFA application. Ex vivo data were corrected assuming a Gaussian dynamic source of susceptibility, centered in the anode/cathode gap of the bipolar RF electrode. After correction, the temperature maps recovered the revolution-symmetry pattern predicted by theory and matched the fluoroptic data within a 4.5 °C mean offset. RFA induces dynamic changes in the magnetic bulk susceptibility of biological tissue, resulting in large and spatially dependent errors in phase-subtraction-only PRFS MRT and unexploitable thermal dose maps. These thermometry artifacts were strongly correlated with the appearance of transient cavitation. A first-order dynamic model of susceptibility provided a useful method for minimizing these artifacts in phantom and ex vivo experiments.
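The phase-to-temperature conversion behind PRFS thermometry is a single formula, ΔT = Δφ / (γ α B0 TE). The sketch below uses the commonly quoted PRF coefficient α of about -0.01 ppm/°C and the study's 1.5 T field; the echo time is a placeholder. Any susceptibility-induced phase adds directly into Δφ, which is exactly why bubble formation corrupts the estimate.

```python
import math

GAMMA = 2 * math.pi * 42.576e6  # proton gyromagnetic ratio, rad/s/T
ALPHA = -0.01e-6                # PRF thermal coefficient, approx. -0.01 ppm/degC

def prfs_delta_T(dphi_rad, b0_tesla=1.5, te_s=0.010):
    """Temperature change from a baseline-subtracted GRE phase shift.
    Assumes the phase change is purely thermal; any bulk-susceptibility
    phase (e.g., from cavitation bubbles) biases the result."""
    return dphi_rad / (GAMMA * ALPHA * b0_tesla * te_s)
```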
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from a chart review of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, in which primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%), with only 1 attending having accuracy below 80%. Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
The AuScope geodetic VLBI array
NASA Astrophysics Data System (ADS)
Lovell, J. E. J.; McCallum, J. N.; Reid, P. B.; McCulloch, P. M.; Baynes, B. E.; Dickey, J. M.; Shabala, S. S.; Watson, C. S.; Titov, O.; Ruddick, R.; Twilley, R.; Reynolds, C.; Tingay, S. J.; Shield, P.; Adada, R.; Ellingsen, S. P.; Morgan, J. S.; Bignall, H. E.
2013-06-01
The AuScope geodetic Very Long Baseline Interferometry array consists of three new 12-m radio telescopes and a correlation facility in Australia. The telescopes at Hobart (Tasmania), Katherine (Northern Territory) and Yarragadee (Western Australia) are co-located with other space geodetic techniques, including Global Navigation Satellite Systems (GNSS) and gravity infrastructure, and in the case of Yarragadee, satellite laser ranging (SLR) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) facilities. The correlation facility is based in Perth (Western Australia). This new facility will make significant contributions to improving the densification of the International Celestial Reference Frame in the Southern Hemisphere, and subsequently enhance the International Terrestrial Reference Frame through the ability to detect and mitigate systematic error. This, combined with the simultaneous densification of the GNSS network across Australia, will enable improved measurement of intraplate deformation across the Australian tectonic plate. In this paper, we describe this new infrastructure and present some initial results, including telescope performance measurements and positions of the telescopes in the International Terrestrial Reference Frame. We show that this array is already capable of achieving centimetre precision over typical long baselines and that network and reference-source systematic effects must be further improved to reach the ambitious goals of VLBI2010.
NASA Astrophysics Data System (ADS)
Huang, Chong; Radabaugh, Jeffrey P.; Aouad, Rony K.; Lin, Yu; Gal, Thomas J.; Patel, Amit B.; Valentino, Joseph; Shang, Yu; Yu, Guoqiang
2015-07-01
Knowledge of tissue blood flow (BF) changes after free tissue transfer may enable surgeons to predict the failure of flap thrombosis at an early stage. This study used our recently developed noncontact diffuse correlation spectroscopy to monitor dynamic BF changes in free flaps without getting in contact with the targeted tissue. Eight free flaps were elevated in patients with head and neck cancer; one of the flaps failed. Multiple BF measurements probing the transferred tissue were performed during and post the surgical operation. Postoperative BF values were normalized to the intraoperative baselines (assigning "1") for the calculation of relative BF change (rBF). The rBF changes over the seven successful flaps were 1.89±0.15, 2.26±0.13, and 2.43±0.13 (mean±standard error), respectively, on postoperative days 2, 4, and 7. These postoperative values were significantly higher than the intraoperative baseline values (p<0.001), indicating a gradual recovery of flap vascularity after the tissue transfer. By contrast, rBF changes observed from the unsuccessful flaps were 1.14 and 1.34, respectively, on postoperative days 2 and 4, indicating less flow recovery. Measurement of BF recovery after flap anastomosis holds the potential to act early to salvage ischemic flaps.
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
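The paper's modified least squares estimator is not reproduced here, but the classical fit built around the same quantity, the variance ratio, is Deming regression. A minimal sketch for a straight-line calibration, shown only to illustrate how the ratio enters the estimate:

```python
import numpy as np

def deming_slope(x, y, lam):
    """Deming regression slope when both x (factor) and y (response)
    contain error; lam is the variance ratio, i.e. response error
    variance divided by measurement (factor) error variance."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    term = syy - lam * sxx
    return (term + np.sqrt(term**2 + 4.0 * lam * sxy**2)) / (2.0 * sxy)
```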
Comparative study of outcome measures and analysis methods for traumatic brain injury trials.
Alali, Aziz S; Vavrek, Darcy; Barber, Jason; Dikmen, Sureyya; Nathens, Avery B; Temkin, Nancy R
2015-04-15
Batteries of functional and cognitive measures have been proposed as alternatives to the Extended Glasgow Outcome Scale (GOSE) as the primary outcome for traumatic brain injury (TBI) trials. We evaluated several approaches to analyzing GOSE and a battery of four functional and cognitive measures. Using data from a randomized trial, we created a "super" dataset of 16,550 subjects from patients with complete data (n=331) and then simulated multiple treatment effects across multiple outcome measures. Patients were sampled with replacement (bootstrapping) to generate 10,000 samples for each treatment effect (n=400 patients/group). The percentage of samples where the null hypothesis was rejected estimates the power. All analytic techniques had appropriate rates of type I error (≤5%). Accounting for baseline prognosis either by using sliding dichotomy for GOSE or using regression-based methods substantially increased the power over the corresponding analysis without accounting for prognosis. Analyzing GOSE using multivariate proportional odds regression or analyzing the four-outcome battery with regression-based adjustments had the highest power, assuming equal treatment effect across all components. Analyzing GOSE using a fixed dichotomy provided the lowest power for both unadjusted and regression-adjusted analyses. We assumed an equal treatment effect for all measures. This may not be true in an actual clinical trial. Accounting for baseline prognosis is critical to attaining high power in Phase III TBI trials. The choice of primary outcome for future trials should be guided by power, the domain of brain function that an intervention is likely to impact, and the feasibility of collecting outcome data.
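A minimal sketch of the resampling scheme described above, with a plain two-sample t-test standing in for the GOSE-specific analyses compared in the study; scores_t and scores_c would be the treated and control outcome columns of the "super" dataset:

```python
import numpy as np
from scipy import stats

def bootstrap_power(scores_t, scores_c, n_per_group=400,
                    n_boot=10_000, alpha=0.05, seed=1):
    """Estimate power by resampling with replacement: power is the
    fraction of bootstrap samples in which the null is rejected."""
    rng = np.random.default_rng(seed)
    rejected = 0
    for _ in range(n_boot):
        t = rng.choice(scores_t, size=n_per_group, replace=True)
        c = rng.choice(scores_c, size=n_per_group, replace=True)
        if stats.ttest_ind(t, c).pvalue < alpha:
            rejected += 1
    return rejected / n_boot
```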
CDGPS-Based Relative Navigation for Multiple Spacecraft
NASA Technical Reports Server (NTRS)
Mitchell, Megan Leigh
2004-01-01
This thesis investigates the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters for formation flying spacecraft. This work analyzes the relationship between the Extended Kalman Filter (EKF) design parameters and the resulting estimation accuracies, and in particular, the effect of the process and measurement noises on the semimajor axis error. This analysis clearly demonstrates that CDGPS-based relative navigation Kalman filters yield good estimation performance without satisfying the strong correlation property that previous work had associated with "good" navigation filters. Several examples are presented to show that the Kalman filter can be forced to create solutions with stronger correlations, but these always result in larger semimajor axis errors. These linear and nonlinear simulations also demonstrated the crucial role of the process noise in determining the semimajor axis knowledge. More sophisticated nonlinear models were included to reduce the propagation error in the estimator, but for long time steps and large separations, the EKF, which only uses a linearized covariance propagation, yielded very poor performance. In contrast, the CDGPS-based Unscented Kalman relative navigation Filter (UKF) handled the dynamic and measurement nonlinearities much better and yielded far superior performance than the EKF. The UKF produced good estimates for scenarios with long baselines and time steps for which the EKF would diverge rapidly. A hardware-in-the-loop testbed that is compatible with the Spirent Simulator at NASA GSFC was developed to provide a very flexible and robust capability for demonstrating CDGPS technologies in closed-loop. This extended previous work to implement the decentralized relative navigation algorithms in real time.
Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning.
Shi, Junbo; Wang, Gaojing; Han, Xianquan; Guo, Jiming
2017-06-12
Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products will probably cause significant consequences for GPS positioning, especially for real-time applications. Currently three types of satellite products have been made available for real-time positioning, including the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product; the evaluation shows cm- to m-level orbit accuracy and sub-ns to ns-level clock accuracy. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect the point positioning more significantly than the orbit corrections. By contrast, only the real-time orbit corrections impact the relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, cm-dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.
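A standard back-of-the-envelope rule, not taken from this paper, relates baseline error to orbit error as db ≈ b·(dr/ρ), with ρ the receiver-satellite range (roughly 25,000 km for GPS). Plugging in the reported numbers reproduces the reported magnitudes:

```python
# Rule-of-thumb check: db ~ b * (dr / rho); the ~25,000 km range and the rule
# itself are textbook values, assumed here rather than quoted from the paper.
RHO = 25_000e3                     # approximate GPS receiver-satellite range (m)

def baseline_error(baseline_m, orbit_error_m, rho=RHO):
    return baseline_m * orbit_error_m / rho

for b_km, dr in [(216, 2.0), (2982, 2.0), (2982, 0.025)]:
    print(f"b = {b_km:5d} km, orbit error = {dr:6.3f} m "
          f"-> baseline error ~ {100 * baseline_error(b_km * 1e3, dr):.2f} cm")
```

For a 216 km baseline and a 2 m orbit error this gives about 1.7 cm, consistent with the reported <2 cm; at 2982 km it gives about 24 cm, consistent with the reported cm-dm level.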
Extragalactic radio sources - Accurate positions from very-long-baseline interferometry observations
NASA Technical Reports Server (NTRS)
Rogers, A. E. E.; Counselman, C. C., III; Hinteregger, H. F.; Knight, C. A.; Robertson, D. S.; Shapiro, I. I.; Whitney, A. R.; Clark, T. A.
1973-01-01
Relative positions for 12 extragalactic radio sources have been determined via wide-band very-long-baseline interferometry (wavelength of about 3.8 cm). The standard error, based on consistency between results from widely separated periods of observation, appears to be no more than 0.1 sec for each coordinate of the seven sources that were well observed during two or more periods. The uncertainties in the coordinates determined for the other five sources are larger, but in no case exceed 0.5 sec.
Hartwig, Andreas; Charman, William Neil; Radhakrishnan, Hema
2016-01-01
To determine whether the initial characteristics of individual patterns of peripheral refraction relate to subsequent changes in refraction over a one-year period. 54 myopic and emmetropic subjects (mean age: 24.9±5.1 years; median 24 years) with normal vision were recruited and underwent conventional non-cycloplegic subjective refraction. Peripheral refraction was also measured at 5° intervals over the central 60° of horizontal visual field, together with axial length. After one year, measurements of subjective refraction and axial length were repeated on the 43 subjects who were still available for examination. In agreement with earlier studies, higher myopes tended to show greater relative peripheral hyperopia. There was, however, considerable inter-subject variation in the pattern of relative peripheral refractive error (RPRE) at any level of axial refraction. Across the group, mean one-year changes in axial refraction and axial length did not differ significantly from zero. There was no correlation between changes in these parameters for individual subjects and any characteristic of their RPRE. No evidence was found to support the hypothesis that the pattern of RPRE is predictive of subsequent refractive change in this age group. Copyright © 2015 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.
NASA Technical Reports Server (NTRS)
Takallu, M. A.; Glaab, L. J.; Hughes, M. F.; Wong, D. T.; Bartolone, A. P.
2008-01-01
In support of the NASA Aviation Safety Program's Synthetic Vision Systems Project, a series of piloted simulations was conducted to explore and quantify the relationship between candidate Terrain Portrayal Concepts and Guidance Symbology Concepts, specific to General Aviation. The experiment scenario was based on a low altitude en route flight in Instrument Meteorological Conditions in the central mountains of Alaska. A total of 18 general aviation pilots, with three levels of pilot experience, evaluated a test matrix of four terrain portrayal concepts and six guidance symbology concepts. Quantitative measures included various pilot/aircraft performance data, flight technical errors and flight control inputs. The qualitative measures included pilot comments and pilot responses to structured questionnaires covering perceived workload, subjective situation awareness, pilot preferences, and rare-event recognition. There were statistically significant effects found from guidance symbology concepts and terrain portrayal concepts but no significant interactions between them. Lower flight technical errors and increased situation awareness were achieved using Synthetic Vision Systems displays, as compared to the baseline Pitch/Roll Flight Director and Blue Sky Brown Ground combination. Overall, those guidance symbology concepts that have both path based guidance cue and tunnel display performed better than the other guidance concepts.
NASA Astrophysics Data System (ADS)
Pace, Phillip Eric; Tan, Chew Kung; Ong, Chee K.
2018-02-01
Direction finding (DF) systems are fundamental electronic support measures for electronic warfare. A number of DF techniques have been developed over the years; however, these systems are limited in bandwidth and resolution and suffer from a complex design for frequency downconversion. The design of a photonic DF technique for the detection and DF of low probability of intercept (LPI) signals is investigated. Key advantages of this design include a small baseline, wide bandwidth, high resolution, and minimal space, weight, and power requirements. A robust postprocessing algorithm that utilizes the minimum Euclidean distance detector provides consistent and accurate estimation of angle of arrival (AoA) for a wide range of LPI waveforms. Experimental tests using frequency modulation continuous wave (FMCW) and P4 modulation signals were conducted in an anechoic chamber to verify the system design. Test results showed that the photonic DF system is capable of measuring the AoA of the LPI signals with 1-deg resolution over a 180 deg field-of-view. For an FMCW signal, the AoA was determined with an RMS error of 0.29 deg at 1-deg resolution. For a P4 coded signal, the RMS error in estimating the AoA was 0.32 deg at 1-deg resolution.
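A minimum-Euclidean-distance detector of the kind described reduces to a nearest-neighbor search over a calibration table indexed by angle. In the sketch below, the two-component feature vectors and the 1-deg grid are illustrative placeholders for whatever the photonic front end actually measures.

```python
# Minimum-Euclidean-distance AoA estimation over a calibration template table.
import numpy as np

def estimate_aoa(measurement, templates, angles):
    """Return the angle whose calibration template is closest in L2 norm."""
    d = np.linalg.norm(templates - measurement, axis=1)
    return angles[np.argmin(d)]

angles = np.arange(-90.0, 91.0, 1.0)          # 1-deg grid, 180-deg field of view
templates = np.stack([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                      for a in angles])       # assumed feature vectors

rng = np.random.default_rng(2)
truth = 37.0
meas = (np.array([np.cos(np.radians(truth)), np.sin(np.radians(truth))])
        + rng.normal(0, 0.01, 2))             # noisy measured feature vector
print(estimate_aoa(meas, templates, angles))  # -> 37.0
```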
Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Daniel S.; Mines, Gregory L.; Turchi, Craig S.
2015-09-02
The Stillwater Power Plant is the first hybrid plant in the world able to bring together a medium-enthalpy geothermal unit with solar thermal and solar photovoltaic systems. Solar field and power plant models have been developed to predict the performance of the Stillwater geothermal / solar-thermal hybrid power plant. The models have been validated using operational data from the Stillwater plant. A preliminary effort to optimize performance of the Stillwater hybrid plant using optical characterization of the solar field has been completed. The Stillwater solar field optical characterization involved measurement of mirror reflectance, mirror slope error, and receiver position error. The measurements indicate that the solar field may generate 9% less energy than the design value if an appropriate tracking offset is not employed. A perfect tracking offset algorithm may be able to boost the solar field performance by about 15%. The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV position optimization, turbine inlet entropy control using optimization of multiple process variables, and mixed working fluid substitution. The hybrid plant models predict that each of these operating strategies could increase net power generation relative to the baseline Stillwater hybrid plant operations.
Use of the VLBI delay observable for orbit determination of Earth-orbiting VLBI satellites
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1992-01-01
Very long-baseline interferometry (VLBI) observations using a radio telescope in Earth orbit were first performed in the 1980s. Two spacecraft dedicated to VLBI are scheduled for launch in 1995; the primary scientific goals of these missions will be astrophysical in nature. This article addresses the use of space VLBI delay data for the additional purpose of improving the orbit determination of the Earth-orbiting spacecraft. In an idealized case of quasi-simultaneous observations of three radio sources in orthogonal directions, analytical expressions are found for the instantaneous spacecraft position and its error. The typical position error is at least as large as the distance corresponding to the delay measurement accuracy but can be much greater for some geometries. A number of practical considerations, such as system noise and imperfect calibrations, set bounds on the orbit-determination accuracy realistically achievable using space VLBI delay data. These effects limit the spacecraft position accuracy to at least 35 cm (and probably 3 m or more) for the first generation of dedicated space VLBI experiments. Even a 35-cm orbital accuracy would fail to provide global VLBI astrometry as accurate as ground-only VLBI. Recommended changes in future space VLBI missions are unlikely to make space VLBI competitive with ground-only VLBI in global astrometric measurements.
Ha, Jae Wook; Couper, David J.; O’Neal, Wanda K.; Barr, R. Graham; Bleecker, Eugene R.; Carretta, Elizabeth E.; Cooper, Christopher B.; Doerschuk, Claire M.; Drummond, M Bradley; Han, MeiLan K.; Hansel, Nadia N.; Kim, Victor; Kleerup, Eric C.; Martinez, Fernando J.; Rennard, Stephen I.; Tashkin, Donald; Woodruff, Prescott G.; Paine, Robert; Curtis, Jeffrey L.; Kanner, Richard E.
2017-01-01
Rationale Understanding the reliability and repeatability of clinical measurements used in the diagnosis, treatment and monitoring of disease progression is of critical importance across all disciplines of clinical practice and in clinical trials to assess therapeutic efficacy and safety. Objectives Our goal is to understand normal variability for assessing true changes in health status and to more accurately utilize this data to differentiate disease characteristics and outcomes. Methods Our study is the first study designed entirely to establish the repeatability of a large number of instruments utilized for the clinical assessment of COPD in the same subjects over the same period. We utilized SPIROMICS participants (n = 98) who returned to their clinical center within 6 weeks of their baseline visit to repeat complete baseline assessments. Demographics, spirometry, questionnaires, complete blood cell counts (CBC), medical history, and emphysema status by computerized tomography (CT) imaging were obtained. Results Pulmonary function tests (PFTs) were highly repeatable (ICCs > 0.9) but the 6 minute walk (6MW) was less so (ICC = 0.79). Among questionnaires, the Saint George’s Respiratory Questionnaire (SGRQ) was most repeatable. Self-reported clinical features, such as exacerbation history, and features of chronic bronchitis, often produced kappa values <0.6. Reported age at starting smoking and average number of cigarettes smoked were modestly repeatable (kappa = 0.76 and 0.79). CBC variables produced ICC values between 0.6 and 0.8. Conclusions PFTs were highly repeatable, while subjective measures and subject recall were more variable. Analyses using features with poor repeatability could lead to misclassification and outcome errors. Hence, care should be taken when interpreting change in clinical features based on measures with low repeatability. Efforts to improve repeatability of key clinical features such as exacerbation history and chronic bronchitis are warranted. PMID:28934249
NASA Astrophysics Data System (ADS)
Scherneck, Hans-Georg; Haas, Rüdiger
We show the influence of horizontal displacements due to ocean tide loading on the determination of polar motion and UT1 (PMU) on the daily and subdaily timescale. So-called ‘virtual PMU variations’ due to modelling errors of ocean tide loading are predicted for geodetic Very Long Baseline Interferometry (VLBI) networks. This leads to errors in the subdaily determination of PMU. The predicted effects are confirmed by the analysis of geodetic VLBI observations.
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Quantitative myocardial perfusion from static cardiac and dynamic arterial CT
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.
2018-05-01
Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol both at rest and vasodilator stress conditions. Using the measured input function plus a single (enhanced CT only) or double (enhanced and contrast-free baseline CT) myocardial acquisition yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error of 26.0% relative to the measured input function, which led to MBF estimation errors more than threefold higher than those obtained using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
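The lookup-table step can be sketched with a simple one-compartment tissue model; this is an illustration of the idea, not the authors' implementation, and the gamma-variate input function, volume of distribution, and acquisition time are all assumed values.

```python
# SCDA lookup sketch: a sampled arterial input function plus a one-compartment
# tissue model generate enhancement-vs-MBF curves at the single whole-heart
# acquisition time, inverted by nearest-neighbor lookup.
import numpy as np

dt = 0.5                                   # s, input-function sampling
t = np.arange(0, 40, dt)
aif = 400 * (t / 6) ** 3 * np.exp(-t / 2)  # toy gamma-variate AIF (HU)

def tissue_curve(mbf_ml_min_g, v_d=0.15):
    """One-compartment tissue response: C_t = F * (AIF convolved with exp(-F t / V_d))."""
    f = mbf_ml_min_g / 60.0                # ml/s/g
    residue = np.exp(-f * t / v_d)
    return f * np.convolve(aif, residue)[: len(t)] * dt

mbf_grid = np.linspace(0.3, 5.0, 200)      # candidate flows, ml/min/g
t_acq = 10.0                               # single acquisition timed near peak (s)
idx = int(t_acq / dt)
lookup = np.array([tissue_curve(m)[idx] for m in mbf_grid])

def mbf_from_enhancement(hu):
    return mbf_grid[np.argmin(np.abs(lookup - hu))]

print(mbf_from_enhancement(tissue_curve(1.2)[idx]))   # recovers ~1.2
```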
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
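The classical/Berkson distinction that drives the paper's bias analysis can be demonstrated in a few lines. This simulation uses the simplest uncorrelated forms of the two error types, not the paper's mixed model with autocorrelation:

```python
# Classical error attenuates a regression slope; pure (uncorrelated) Berkson
# error leaves it unbiased. Illustrative simulation, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 1.0
x = rng.normal(0, 1, n)                       # true exposure

# Classical error: w = x + u  ->  slope attenuated by var(x)/(var(x)+var(u))
w = x + rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 0.5, n)
print(np.polyfit(w, y, 1)[0])                 # ~0.5

# Berkson error: true value scatters around the assigned value w_b
w_b = rng.normal(0, 1, n)
x_b = w_b + rng.normal(0, 1, n)
y_b = beta * x_b + rng.normal(0, 0.5, n)
print(np.polyfit(w_b, y_b, 1)[0])             # ~1.0
```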
A meta-analysis of inhibitory-control deficits in patients diagnosed with Alzheimer's dementia.
Kaiser, Anna; Kuhlmann, Beatrice G; Bosnjak, Michael
2018-05-10
The authors conducted meta-analyses to determine the magnitude of performance impairments in patients diagnosed with Alzheimer's dementia (AD) compared with healthy aging (HA) controls on eight tasks commonly used to measure inhibitory control. Response time (RT) and error rates from a total of 64 studies were analyzed with random-effects models (overall effects) and mixed-effects models (moderator analyses). Large differences between AD patients and HA controls emerged in the basic inhibition conditions of many of the tasks with AD patients often performing slower, overall d = 1.17, 95% CI [0.88-1.45], and making more errors, d = 0.83 [0.63-1.03]. However, comparably large differences were also present in performance on many of the baseline control-conditions, d = 1.01 [0.83-1.19] for RTs and d = 0.44 [0.19-0.69] for error rates. A standardized derived inhibition score (i.e., control-condition score - inhibition-condition score) suggested no significant mean group difference for RTs, d = -0.07 [-0.22 to 0.08], and only a small difference for errors, d = 0.24 [-0.12 to 0.60]. Effects systematically varied across tasks and with AD severity. Although the error rate results suggest a specific deterioration of inhibitory-control abilities in AD, further processes beyond inhibitory control (e.g., a general reduction in processing speed and other, task-specific attentional processes) appear to contribute to AD patients' performance deficits observed on a variety of inhibitory-control tasks. Nonetheless, the inhibition conditions of many of these tasks well discriminate between AD patients and HA controls. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Fuller-Rowell, Thomas E; Curtis, David S; Doan, Stacey N; Coe, Christopher L
2015-01-01
The current study examined the prospective effects of educational attainment on proinflammatory physiology among African American and white adults. Participants were 1192 African Americans and 1487 whites who participated in Year 5 (mean [standard deviation] age = 30 [3.5] years), and Year 20 (mean [standard deviation] age = 45 [3.5]) of an ongoing longitudinal study. Initial analyses focused on age-related changes in fibrinogen across racial groups, and parallel analyses for C-reactive protein and interleukin-6 assessed at Year 20. Models then estimated the effects of educational attainment on changes in inflammation for African Americans and whites before and after controlling for four blocks of covariates: a) early life adversity, b) health and health behaviors at baseline, c) employment and financial measures at baseline and follow-up, and d) psychosocial stresses in adulthood. African Americans had larger increases in fibrinogen over time than whites (B = 24.93, standard error = 3.24, p < .001), and 37% of this difference was explained after including all covariates. Effects of educational attainment were weaker for African Americans than for whites (B = 10.11, standard error = 3.29, p = .002), and only 8% of this difference was explained by covariates. Analyses for C-reactive protein and interleukin-6 yielded consistent results. The effects of educational attainment on inflammation levels were stronger for white than for African American participants. Why African Americans do not show the same health benefits with educational attainment is an important question for health disparities research.
NASA Astrophysics Data System (ADS)
Kelly, J. J.; Gayou, O.; Roché, R. E.; Chai, Z.; Jones, M. K.; Sarty, A. J.; Frullani, S.; Aniol, K.; Beise, E. J.; Benmokhtar, F.; Bertozzi, W.; Boeglin, W. U.; Botto, T.; Brash, E. J.; Breuer, H.; Brown, E.; Burtin, E.; Calarco, J. R.; Cavata, C.; Chang, C. C.; Chant, N. S.; Chen, J.-P.; Coman, M.; Crovelli, D.; Leo, R. De; Dieterich, S.; Escoffier, S.; Fissum, K. G.; Garde, V.; Garibaldi, F.; Georgakopoulos, S.; Gilad, S.; Gilman, R.; Glashausser, C.; Hansen, J.-O.; Higinbotham, D. W.; Hotta, A.; Huber, G. M.; Ibrahim, H.; Iodice, M.; Jager, C. W. De; Jiang, X.; Klimenko, A.; Kozlov, A.; Kumbartzki, G.; Kuss, M.; Lagamba, L.; Laveissière, G.; Lerose, J. J.; Lindgren, R. A.; Liyange, N.; Lolos, G. J.; Lourie, R. W.; Margaziotis, D. J.; Marie, F.; Markowitz, P.; McAleer, S.; Meekins, D.; Michaels, R.; Milbrath, B. D.; Mitchell, J.; Nappa, J.; Neyret, D.; Perdrisat, C. F.; Potokar, M.; Punjabi, V. A.; Pussieux, T.; Ransome, R. D.; Roos, P. G.; Rvachev, M.; Saha, A.; Širca, S.; Suleiman, R.; Strauch, S.; Templon, J. A.; Todor, L.; Ulmer, P. E.; Urciuoli, G. M.; Weinstein, L. B.; Wijsooriya, K.; Wojtsekhowski, B.; Zheng, X.; Zhu, L.
2007-02-01
We measured angular distributions of differential cross section, beam analyzing power, and recoil polarization for neutral pion electroproduction at Q²=1.0 (GeV/c)² in 10 bins of 1.17⩽W⩽1.35 GeV across the Δ resonance. A total of 16 independent response functions were extracted, of which 12 were observed for the first time. Comparisons with recent model calculations show that response functions governed by real parts of interference products are determined relatively well near the physical mass, W=MΔ≈1.232 GeV, but the variation among models is large for response functions governed by imaginary parts, and for both types of response functions, the variation increases rapidly with W>MΔ. We performed a multipole analysis that adjusts suitable subsets of ℓπ⩽2 amplitudes with higher partial waves constrained by baseline models. This analysis provides both real and imaginary parts. The fitted multipole amplitudes are nearly model independent; there is very little sensitivity to the choice of baseline model or truncation scheme. By contrast, truncation errors in the traditional Legendre analysis of N→Δ quadrupole ratios are not negligible. Parabolic fits to the W dependence around MΔ for the multipole analysis give values of Re(S1+/M1+)=(-6.61±0.18)% and Re(E1+/M1+)=(-2.87±0.19)% for the pπ0 channel at W=1.232 GeV and Q²=1.0 (GeV/c)² that are distinctly larger than those from the Legendre analysis of the same data. Similarly, the multipole analysis gives Re(S0+/M1+)=(+7.1±0.8)% at W=1.232 GeV, consistent with recent models, while the traditional Legendre analysis gives the opposite sign because its truncation errors are quite severe.
The Oxford Ankle Foot Questionnaire for children: responsiveness and longitudinal validity.
Morris, Christopher; Doll, Helen; Davies, Neville; Wainwright, Andrew; Theologis, Tim; Willett, Keith; Fitzpatrick, Ray
2009-12-01
To evaluate how scores from the Oxford Ankle Foot Questionnaire change over time and with treatment using both distribution-based and anchor-based approaches. Eighty children aged 5-16 and their parent or carer completed questionnaires at orthopaedic or trauma outpatient clinics. They were asked to complete and return a second set of questionnaires within 2 weeks (retest), and were then mailed a third set of questionnaires to complete after 2 months (follow-up). The follow-up questionnaires included a global rating of change 'transition' item. Child- and parent-reported mean domain scores (Physical, School & Play, and Emotional) were all stable at retest, whereas positive mean changes were observed at follow-up. As we hypothesised, trauma patients had poorer scores than elective patients at baseline, and showed greater improvement at follow-up. For trauma patients, mean changes in per cent scores were large (scores improved between 40 and 56 for the Physical and School & Play domains, and 17 and 21 for Emotional); all effect sizes (ES) were large (>0.8). For elective patients, the mean improvements in per cent scores were more moderate (Physical: child 10, ES = 0.4, parent 11, ES = 0.5; School & Play: child 0, ES = 0, parent 9, ES = 0.4; Emotional: child 6, ES = 0.2; parents 8, ES > 0.3). Minimal detectable change (MDC(90)), an indication of measurement error, ranged from 6 to 8. Half the standard deviation of baseline scores ranged from 11 to 18. Minimal important difference could only be calculated for elective patients (9 child and 13 parent ratings); these ranged from 7 to 17. The findings support the responsiveness and longitudinal validity of the scales. Changes in domain scores of, or exceeding, the MDC(90) (6-8) are likely to be beyond measurement error; further work is required to refine the estimate of change that can be considered important.
Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L
2018-06-19
This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic therapy, or systemic therapy alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy but predicting the patients that will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • Prediction model constructed that predicts volumetric response with 20% error suggesting that quantitative imaging holds promise to better select patients for specific treatments.
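A minimal version of the described pipeline (70/30 split, linear regression on pretreatment features, evaluation by mean absolute prediction error in percentage points of volume change) looks like this; the features and data are synthetic placeholders:

```python
# Train/test linear regression with MAPE evaluation; synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 157
X = rng.normal(size=(n, 5))                   # quantitative image features
coef = np.array([8.0, -5.0, 3.0, 0.0, 0.0])
y = 30 + X @ coef + rng.normal(0, 8, n)       # % volume change from baseline

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

mape_train = np.mean(np.abs(model.predict(X_tr) - y_tr))
mape_test = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"MAPE train {mape_train:.1f}%, test {mape_test:.1f}%, "
      f"R2 train {model.score(X_tr, y_tr):.3f}")
```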
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may help provide relatively accurate air temperature measurements.
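The paper fits its correction equation with a genetic algorithm; as a simpler stand-in, the sketch below fits an assumed functional form to CFD-style (solar radiation, wind speed, temperature error) data by nonlinear least squares. The model form and all numbers are illustrative.

```python
# Fitting a temperature-error correction equation; least squares used here in
# place of the paper's genetic algorithm, with an assumed model form.
import numpy as np
from scipy.optimize import curve_fit

def error_model(X, a, b, c, d):
    s, v = X                                   # solar radiation, wind speed
    return a + b * s + c / (v + d)

rng = np.random.default_rng(6)
s = rng.uniform(0, 1000, 200)                  # W/m^2
v = rng.uniform(0.5, 10, 200)                  # m/s
err = 0.02 + 3e-4 * s + 0.2 / (v + 1) + rng.normal(0, 0.01, 200)  # synthetic °C

popt, _ = curve_fit(error_model, (s, v), err, p0=[0, 1e-4, 0.1, 1])
corrected = err - error_model((s, v), *popt)
print("RMS residual after correction:", np.sqrt(np.mean(corrected**2)))
```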
NASA Technical Reports Server (NTRS)
Ma, Chopo; Gordon, David; MacMillan, Daniel
1999-01-01
Precise geodetic Very Long Baseline Interferometry (VLBI) measurements have been made since 1979 at about 130 points on all major tectonic plates, including stable interiors and deformation zones. From the data set of about 2900 observing sessions and about 2.3 million observations, useful three-dimensional velocities can be derived for about 80 sites using an incremental least-squares adjustment of terrestrial, celestial, Earth rotation and site/session-specific parameters. The long history and high precision of the data yield formal errors for horizontal velocity as low as 0.1 mm/yr, but the limitation on the interpretation of individual site velocities is the tie to the terrestrial reference frame. Our studies indicate that the effect of converting precise relative VLBI velocities to individual site velocities is an error floor of about 0.4 mm/yr. Most VLBI horizontal velocities in stable plate interiors agree with the NUVEL-1A model, but there are significant departures in Africa and the Pacific. Vertical precision is worse by a factor of 2-3, and there are significant non-zero values that can be interpreted as post-glacial rebound, regional effects, and local disturbances.
Monitoring Error Rates In Illumina Sequencing.
Manley, Leigh J; Ma, Duanduan; Levine, Stuart S
2016-12-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted.
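The core of a percent-perfect-reads computation is counting reads whose first k bases match the reference exactly, as a function of k. The toy below works from raw strings; the real tool works from alignments and handles the platform details the abstract alludes to.

```python
# Toy percent-perfect-reads calculation from reads with known reference starts.
def percent_perfect(reads, ref_starts, reference, k):
    """Percent of reads whose first k bases match the reference exactly."""
    perfect = sum(reference[s:s + k] == r[:k] for r, s in zip(reads, ref_starts))
    return 100.0 * perfect / len(reads)

reference = "ACGTACGTACGTACGTACGT" * 5          # 100-base toy reference
read1 = reference[10:45]                        # error-free 35-mer
read2 = list(reference[20:55]); read2[30] = "N" # one error late in the read
reads, starts = [read1, "".join(read2)], [10, 20]
for k in (10, 25, 35):
    print(k, percent_perfect(reads, starts, reference, k))  # 100, 100, 50
```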
McKaig, Donald; Collins, Christine; Elsaid, Khaled A
2014-09-01
A study was conducted to evaluate the impact of a reengineered approach to electronic error reporting at a 719-bed multidisciplinary urban medical center. The main outcome of interest was the monthly reported medication errors during the preimplementation (20 months) and postimplementation (26 months) phases. An interrupted time series analysis was used to describe baseline errors, immediate change following implementation of the current electronic error-reporting system (e-ERS), and the trend of error reporting during postimplementation. Errors were categorized according to severity using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Medication Error Index classifications. Reported errors were further analyzed by reporter and error site. During preimplementation, the mean number of monthly reported errors was 40.0 (95% confidence interval [CI]: 36.3-43.7). Immediately following e-ERS implementation, monthly reported errors significantly increased by 19.4 errors (95% CI: 8.4-30.5). The change in slope of the reported-errors trend was estimated at 0.76 (95% CI: 0.07-1.22). Near misses and no-patient-harm errors accounted for 90% of all errors, while errors that caused increased patient monitoring or temporary harm accounted for 9% and 1%, respectively. Nurses were the most frequent reporters, while physicians were more likely to report high-severity errors. Medical care units accounted for approximately half of all reported errors. Following the intervention, there was a significant increase in reporting of prevented errors and errors that reached the patient with no resultant harm. This improvement in reporting was sustained for 26 months and has contributed to designing and implementing quality improvement initiatives to enhance the safety of the medication use process.
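The interrupted time-series model described (baseline level and trend, a level change at implementation, and a post-implementation slope change) is a segmented regression. The sketch below simulates data using the reported coefficients as ground truth and recovers them:

```python
# Segmented regression for an interrupted time series; data are simulated.
import numpy as np
import statsmodels.api as sm

pre, post = 20, 26
t = np.arange(pre + post)                      # month index
after = (t >= pre).astype(float)               # implementation indicator
t_after = np.where(t >= pre, t - pre + 1, 0)   # months since implementation

rng = np.random.default_rng(5)
errors = 40 + 19.4 * after + 0.76 * t_after + rng.normal(0, 5, t.size)

X = sm.add_constant(np.column_stack([t, after, t_after]))
fit = sm.OLS(errors, X).fit()
print(fit.params)   # ~[40, 0, 19.4, 0.76]: level, trend, level change, slope change
```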
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to serve as the baseline error propagation analysis to which Earth-based and Lunar-based radiometric data are added, in order to compare the different architecture schemes and quantify the benefits of an integrated approach for handling lunar surface mobility applications near the Lunar South Pole or on the Lunar Farside.
Measurement properties of the Human Activity Profile questionnaire in hospitalized patients.
Souza, Daniel C; Wegner, Fernando; Costa, Lucíola C M; Chiavegato, Luciana D; Lunardi, Adriana C
To test the measurement properties (reproducibility, internal consistency, ceiling and floor effects, and construct validity) of the Human Activity Profile (HAP) questionnaire in hospitalized patients. This measurement properties study recruited one hundred patients hospitalized for less than 48 h for clinical or surgical reasons. The HAP was administered at baseline and after 48 h in a test-retest design. The International Physical Activity Questionnaire (IPAQ-6) was also administered at baseline, aiming to assess the construct validity. We tested the following measurement properties: reproducibility (reliability assessed by the type (2,1) intraclass correlation coefficient, ICC(2,1)); agreement by the standard error of measurement (SEM) and by the minimum detectable change with 90% confidence (MDC90); internal consistency by Cronbach's alpha; construct validity using a chi-square test; and ceiling and floor effects by calculating the proportion of patients who achieved the minimum or maximum scores. Reliability was excellent with an ICC of 0.99 (95% CI=0.98-0.99). The SEM was 1.44 points (1.5% of the total score), the MDC90 was 3.34 points (3.5% of the total score) and Cronbach's alpha was 0.93 (alpha if item deleted ranging from 0.94 to 0.94). An association was observed between patients classified by HAP and by IPAQ-6 (χ²=3.38; p=0.18). Ceiling or floor effects were not observed. The HAP shows adequate measurement properties for the assessment of the physical activity/inactivity level in hospitalized patients. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.
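The agreement statistics quoted here follow the standard formulas SEM = SD·√(1−ICC) and MDC90 = SEM·1.645·√2. A quick check with the reported ICC; the baseline SD is back-computed to match the reported SEM, since the abstract does not state it directly:

```python
# Standard agreement formulas; the SD value is an assumption chosen so the
# SEM matches the reported 1.44 points.
import math

def sem(sd, icc):
    return sd * math.sqrt(1 - icc)

def mdc90(sem_value):
    return sem_value * 1.645 * math.sqrt(2)

s = sem(sd=14.4, icc=0.99)
print(round(s, 2), round(mdc90(s), 2))   # -> 1.44, 3.35 (~3.34 reported)
```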
Steiger-Ronay, Valerie; Tektas, Sibel; Attin, Thomas; Lussi, Adrian; Becker, Klaus; Wiedemeier, Daniel B; Beyeler, Barbara; Carvalho, Thiago S
2018-06-07
The aim of this in vitro study was to investigate the impact of saliva on the abrasion of eroded enamel using two measuring methods. A total of 80 bovine enamel specimens from 20 bovine incisors were allocated to four experimental groups (n = 20 specimens per group). After baseline surface microhardness (SMH) measurements and profilometry all specimens were subjected to erosion (2 min, 1% citric acid, pH: 3.6, 37°C). SMH was determined again, and the depths of the Knoop indentations were calculated. Thereafter, specimens were incubated in human saliva (group 1 - no incubation/control, group 2 - 0.5 h, group 3 - 1 h, group 4 - 2 h) before toothbrush abrasion was performed. After final SMH measurements and profilometry, indentations were remeasured, and surface loss was calculated. SMH did not return to baseline values regardless of the length of saliva incubation. Further, an irreversible substance loss was observed for all specimens. With the indentation method, significantly (p < 0.05) more substance loss was found for controls (least square means ± standard error of 198 ± 19 nm) than for groups 2-4 (110 ± 10, 114 ± 11, and 105 ± 14 nm). Profilometric assessment showed significantly more substance loss for controls (122 ± 8 nm) than for group 4 (106 ± 5 nm). Intraclass correlation for interrater reliability between measurement methods was low (0.21, CI: 0.1-0.3), indicating poor agreement. Exposure of eroded enamel to saliva for up to 2 h could not re-establish the original SMH. The amount of measured substance loss depended on the measurement method applied. © 2018 S. Karger AG, Basel.
Vasudevan, Balamurali; Jin, Zi Bing; Ciuffreda, Kenneth J.; Jhanji, Vishal; Zhou, Hong Jia; Wang, Ning Li; Liang, Yuan Bo
2015-01-01
Purpose To investigate the association between maternal reproductive age and children’s refractive error progression in Chinese urban students. Methods The Beijing Myopia Progression Study was a three-year cohort investigation. Cycloplegic refraction of these students at both baseline and follow-up vision examinations, as well as non-cycloplegic refraction of their parents at baseline, were performed. Student’s refractive change was defined as the cycloplegic spherical equivalent (SE) of the right eye at the final follow-up minus the cycloplegic SE of the right eye at baseline. Results At the final follow-up, 241 students (62.4%) were reexamined. 226 students (58.5%) with complete refractive data, as well as complete parental reproductive age data, were enrolled. The average paternal and maternal age increased from 29.4 years and 27.5 years in 1993–1994 to 32.6 years and 29.2 years in 2003–2004, respectively. In the multivariate analysis, students who were younger (β = 0.08 diopter/year/year, P<0.001), with more myopic refraction at baseline (β = 0.02 diopter/year/diopter, P = 0.01), and with older maternal reproductive age (β = -0.18 diopter/year/decade, P = 0.01), had more myopic refractive change. After stratifying the parental reproductive age into quartile groups, children with older maternal reproductive age (trend test: P = 0.04) had more myopic refractive change, after adjusting for the children's age, baseline refraction, maternal refraction, and near work time. However, no significant association between myopic refractive change and paternal reproductive age was found. Conclusions In this cohort, children with older maternal reproductive age had more myopic refractive change. This new risk factor for myopia progression may partially explain the faster myopic progression found in the Chinese population in recent decades. PMID:26421841
Safety Performance of Airborne Separation: Preliminary Baseline Testing
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Wing, David J.; Baxley, Brian T.
2007-01-01
The Safety Performance of Airborne Separation (SPAS) study is a suite of Monte Carlo simulation experiments designed to analyze and quantify safety behavior of airborne separation. This paper presents results of preliminary baseline testing. The preliminary baseline scenario is designed to be very challenging, consisting of randomized routes in generic high-density airspace in which all aircraft are constrained to the same flight level. Sustained traffic density is varied from approximately 3 to 15 aircraft per 10,000 square miles, approximating up to about 5 times today's traffic density in a typical sector. Research at high traffic densities and at multiple flight levels is planned within the next two years. Basic safety metrics for aircraft separation are collected and analyzed. During the progression of experiments, various errors, uncertainties, delays, and other variables potentially impacting system safety will be incrementally introduced to analyze the effect on safety of the individual factors as well as their interaction and collective effect. In this paper we report the results of the first experiment that addresses the preliminary baseline condition tested over a range of traffic densities. Early results at five times the typical traffic density in today's NAS indicate that, under the assumptions of this study, airborne separation can be safely performed. In addition, we report on initial observations from an exploration of four additional factors tested at a single traffic density: broadcast surveillance signal interference, extent of intent sharing, pilot delay, and wind prediction error.
Thermafil: A New Clinical Approach Due to New Dimensional Evaluations
Vittoria, G.; Pantaleo, G.; Blasi, A.; Spagnuolo, G.; Iandolo, A.; Amato, M.
2018-01-01
Background: Many techniques exist for obturating root canals; lateral condensation of gutta-percha is the most widely used. An important aspect of Thermafil is the error margin tolerated by the manufacturer in the production of plastic carriers. In the literature, there is no evidence about the discrepancy percentage between different carriers. It is demonstrated that the error margin is 0.5% for gutta-percha and 0.2% for metal files (ISO standards). Objective: The aim of this study was to evaluate the real dimensions of Thermafil plastic carriers observed under the stereo microscope, measuring the dimensional discrepancy between them. Methods: For this study, 80 new Thermafil obturators (Dentsply Maillefer) were selected: 40 Thermafil 0.25 and 40 Thermafil 0.30. The dimensions of the plastic carrier tips were measured through a 60X stereo microscope. The dimensions of the plastic carrier were also measured after a heating cycle. A ZL GAL 11TUSM (Zetaline stereo evolution) microscope was used to observe the samples, and measurements were made with dedicated software (Image Focus). All samples were analysed at 60X. Results: A non-parametric paired test (Wilcoxon test) was used to compare baseline and after-heating values; p-values ≤ 0.05 were assumed to be statistically significant. Conclusion: The samples we measured showed a mean diameter of 0.27 mm for Thermafil 25 and 0.33 mm for Thermafil 30. We measured a dimensional variation of 8% in the 25 group, while in the 30 group the maximum variation found was 4%; we therefore propose a new obturation protocol with Thermafil. We can also conclude that a single heating process does not clinically affect the plastic carrier dimensions. PMID:29541263
Sustained acceleration on perception of relative position and motion.
McKinley, R Andrew; Tripp, Lloyd D; Fullerton, Kathy L; Goodyear, Chuck
2013-03-01
Air-to-air refueling, formation flying, and projectile countermeasures all rely on a pilot's ability to be aware of his position and motion relative to another object. Eight subjects participated in the study, all members of the sustained acceleration stress panel at Wright-Patterson AFB, OH. The task consisted of the subject performing a two-dimensional join-up task between a KC-135 tanker and an F-16. The objective was to guide the nose of the F-16 to the posterior end of the boom extended from the tanker, and hold this position for 2 s. If the F-16 went past the tanker, or misaligned with the tanker, it would be recorded as an error. These tasks were performed during four G(z) acceleration profiles starting from a baseline acceleration of 1.5 G(z). The plateaus were 3, 5, and 7 G(z). The final acceleration exposure was a simulated aerial combat maneuver (SACM). One subject was an outlier and was therefore omitted from the analysis. The mean capture time and percent error data were recorded and compared separately. There was a significant difference in error percentage change from baseline among the G(z) profiles, but not in capture time. Mean errors were approximately 15% higher in the 7 G profile and 10% higher during the SACM. This experiment suggests that the ability to accurately perceive the motion of objects relative to other objects is impeded at acceleration levels of 7 G(z) or higher.
Othman, Ahmed A; Nothaft, Wolfram; Awni, Walid M; Dutta, Sandeep
2013-01-01
Aim To characterize quantitatively the relationship between ABT-102, a potent and selective TRPV1 antagonist, exposure and its effects on body temperature in humans using a population pharmacokinetic/pharmacodynamic modelling approach. Methods Serial pharmacokinetic and body temperature (oral or core) measurements from three double-blind, randomized, placebo-controlled studies [single dose (2, 6, 18, 30 and 40 mg, solution formulation), multiple-dose (2, 4 and 8 mg twice daily for 7 days, solution formulation) and multiple-dose (1, 2 and 4 mg twice daily for 7 days, solid dispersion formulation)] were analyzed. NONMEM was used for model development and the model building steps were guided by pre-specified diagnostic and statistical criteria. The final model was qualified using non-parametric bootstrap and visual predictive check. Results The developed body temperature model included additive components of baseline, circadian rhythm (cosine function of time) and ABT-102 effect (Emax function of plasma concentration) with tolerance development (decrease in ABT-102 Emax over time). Type of body temperature measurement (oral vs. core) was included as a fixed effect on baseline, amplitude of circadian rhythm and residual error. The model estimates (95% bootstrap confidence interval) were: baseline oral body temperature, 36.3 (36.3, 36.4)°C; baseline core body temperature, 37.0 (37.0, 37.1)°C; oral circadian amplitude, 0.25 (0.22, 0.28)°C; core circadian amplitude, 0.31 (0.28, 0.34)°C; circadian phase shift, 7.6 (7.3, 7.9) h; ABT-102 Emax, 2.2 (1.9, 2.7)°C; ABT-102 EC50, 20 (15, 28) ng ml⁻¹; tolerance T50, 28 (20, 43) h. Conclusions At exposures predicted to exert analgesic activity in humans, the effect of ABT-102 on body temperature is estimated to be 0.6 to 0.8°C. This effect attenuates within 2 to 3 days of dosing. PMID:22966986
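The published model structure translates directly into a simulation. The point estimates below are the ones reported; the tolerance functional form (Emax decaying with a half-time T50) and the concentration profile are assumptions made for illustration.

```python
# Baseline + circadian cosine + Emax drug effect with time-dependent tolerance.
import numpy as np

def body_temp(t_h, conc_ng_ml, oral=True):
    base = 36.3 if oral else 37.0                  # baseline (°C)
    amp = 0.25 if oral else 0.31                   # circadian amplitude (°C)
    phase = 7.6                                    # circadian phase shift (h)
    emax0, ec50, t50 = 2.2, 20.0, 28.0             # °C, ng/ml, h
    circadian = amp * np.cos(2 * np.pi * (t_h - phase) / 24.0)
    emax_t = emax0 * t50 / (t50 + t_h)             # assumed tolerance form
    drug = emax_t * conc_ng_ml / (ec50 + conc_ng_ml)
    return base + circadian + drug

t = np.linspace(0, 72, 145)
conc = 30 * np.exp(-((t % 12) / 8))                # toy twice-daily profile
print(body_temp(t, conc)[:5])
```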
Tsehaie, J; Poot, D H J; Oei, E H G; Verhaar, J A N; de Vos, R J
2017-07-01
To evaluate whether baseline MRI parameters provide prognostic value for clinical outcome, and to study correlation between MRI parameters and clinical outcome. Observational prospective cohort study. Patients with chronic midportion Achilles tendinopathy were included and performed a 16-week eccentric calf-muscle exercise program. Outcome measurements were the validated Victorian Institute of Sports Assessment-Achilles (VISA-A) questionnaire and MRI parameters at baseline and after 24 weeks. The following MRI parameters were assessed: tendon volume (Volume), tendon maximum cross-sectional area (CSA), tendon maximum anterior-posterior diameter (AP), and signal intensity (SI). Intra-class correlation coefficients (ICCs) and minimum detectable changes (MDCs) for each parameter were established in a reliability analysis. Twenty-five patients were included and complete follow-up was achieved in 20 patients. The average VISA-A scores increased significantly by 12.3 points (27.6%). The reliability was fair-good for all MRI parameters, with ICCs > 0.50. Average tendon volume and CSA decreased significantly by 0.28 cm³ (5.2%) and 4.52 mm² (4.6%), respectively. Other MRI parameters did not change significantly. None of the baseline MRI parameters were univariately associated with VISA-A change after 24 weeks. MRI SI increase over 24 weeks was positively correlated with the VISA-A score improvement (B=0.7, R²=0.490, p=0.02). Tendon volume and CSA decreased significantly after 24 weeks of conservative treatment. As these differences were within the MDC limits, they could be a result of a measurement error. Furthermore, MRI parameters at baseline did not predict the change in symptoms, and therefore have no added value in providing a prognosis in daily clinical practice. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
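The linear-algebra core of such a volumetric error model is composing, for each axis, a nominal motion with a small-angle error transform, then differencing the actual and nominal tool poses. The sketch below does this with homogeneous transforms; the axis layout and error values are illustrative, and the paper's full screw-theory treatment is richer than this.

```python
# Composing per-axis geometric errors into a volumetric error with
# homogeneous transforms; values and kinematic chain are illustrative.
import numpy as np

def small_error(dx, dy, dz, ax, ay, az):
    """Small-angle homogeneous error transform (translations + rotations)."""
    return np.array([
        [1.0, -az,  ay, dx],
        [ az, 1.0, -ax, dy],
        [-ay,  ax, 1.0, dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

def nominal(tx, ty, tz):
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]; return T

# X, Y, Z linear axes (mm), each with an attached error transform (mm, rad)
chain = [
    nominal(500, 0, 0) @ small_error(5e-3, 2e-3, 0, 0, 1e-5, 2e-5),
    nominal(0, 800, 0) @ small_error(1e-3, 4e-3, 1e-3, 2e-5, 0, 0),
    nominal(0, 0, 300) @ small_error(0, 0, 3e-3, 1e-5, 1e-5, 0),
]
T_actual = np.linalg.multi_dot(chain)
T_nominal = nominal(500, 0, 0) @ nominal(0, 800, 0) @ nominal(0, 0, 300)
print("volumetric error (mm):", T_actual[:3, 3] - T_nominal[:3, 3])
```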
Automated absolute phase retrieval in across-track interferometry
NASA Technical Reports Server (NTRS)
Madsen, Soren N.; Zebker, Howard A.
1992-01-01
Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2pi, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be recovered. The underlying principles and system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.
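The split-band trick works because the two subbands measure the same delay at slightly different carrier frequencies: the wrapped phases are each ambiguous, but their difference is not (as long as the delay is below 1/(2Δf)), and it pins down the absolute delay and hence the absolute phase. A numeric sketch with illustrative frequencies:

```python
# Absolute delay and phase from two subband phase measurements.
import numpy as np

f1, f2 = 1.25e9, 1.27e9          # subband centre frequencies (Hz), illustrative
tau = 8.3e-9                     # true differential delay (s)

wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi
phi1, phi2 = wrap(2 * np.pi * f1 * tau), wrap(2 * np.pi * f2 * tau)

# Differential phase is unambiguous for tau < 1 / (2 * (f2 - f1)) = 25 ns here
dphi = wrap(phi2 - phi1)
tau_est = dphi / (2 * np.pi * (f2 - f1))
phi_abs = 2 * np.pi * f1 * tau_est               # absolute phase at f1
print(tau_est, phi_abs, 2 * np.pi * f1 * tau)    # tau recovered exactly
```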
Do compensation processes impair mental health? A meta-analysis.
Elbers, Nieke A; Hulst, Liesbeth; Cuijpers, Pim; Akkermans, Arno J; Bruinvels, David J
2013-05-01
Victims who are involved in a compensation process generally have more health complaints compared to victims who are not involved in a compensation process. Previous research regarding the effect of compensation processes has concentrated on the effect on physical health. This meta-analysis focuses on the effect of compensation processes on mental health. Prospective cohort studies addressing compensation and mental health after traffic accidents, occupational accidents or medical errors were identified using PubMed, EMBASE, PsycInfo, CINAHL, and the Cochrane Library. Relevant studies published between January 1966 and 10 June 2011 were selected for inclusion. Ten studies were included. The first finding was that the compensation group already had higher mental health complaints at baseline compared to the non-compensation group (standardised mean difference (SMD)=-0.38; 95% confidence interval (CI) -0.66 to -0.10; p=.01). The second finding was that mental health between baseline and post measurement improved less in the compensation group compared to the non-compensation group (SMD=-0.35; 95% CI -0.70 to -0.01; p=.05). However, the quality of evidence was limited, mainly because of low quality study design and heterogeneity. Being involved in a compensation process is associated with higher mental health complaints but three-quarters of the difference appeared to be already present at baseline. The findings of this study should be interpreted with caution because of the limited quality of evidence. The difference at baseline may be explained by a selection bias or more anger and blame about the accident in the compensation group. The difference between baseline and follow-up may be explained by secondary gain and secondary victimisation. Future research should involve assessment of exposure to compensation processes, should analyse and correct for baseline differences, and could examine the effect of time, compensation scheme design, and claim settlement on (mental) health. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.
1989-01-01
A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method, to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration, to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers, to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_mδe. This analysis identified the speed range where changes in C_mδe could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
NASA Astrophysics Data System (ADS)
Chester, A.; Ball, G. C.; Caballero-Folch, R.; Cross, D. S.; Cruz, S.; Domingo, T.; Drake, T. E.; Garnsworthy, A. B.; Hackman, G.; Hallam, S.; Henderson, J.; Henderson, R.; Korten, W.; Krücken, R.; Moukaddam, M.; Olaizola, B.; Ruotsalainen, P.; Smallcombe, J.; Starosta, K.; Svensson, C. E.; Williams, J.; Wimmer, K.
2017-07-01
A high precision lifetime measurement of the 2_1^+ state in 94Sr was performed at TRIUMF's ISAC-II facility by coupling the recoil distance method, implemented via the TIGRESS integrated plunger, with unsafe Coulomb excitation in inverse kinematics. Due to limited statistics imposed by the use of a radioactive 94Sr beam, a likelihood-ratio χ² method was derived and used to compare experimental data to Geant4 simulations. The B(E2; 2_1^+ → 0_1^+) value extracted from the lifetime measurement of 7.80 +0.50/-0.40 (stat.) ± 0.07 (sys.) ps is approximately 25% larger than previously reported, while the relative error has been reduced by a factor of approximately 8. A baseline deformation has been established for Sr isotopes with N ≤ 58, which is a necessary condition for the quantum phase transition interpretation of the onset of deformation in this region. A comparison to existing theoretical models is presented.
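For readers unfamiliar with likelihood-ratio χ² comparisons of low-statistics spectra against simulation templates, a minimal sketch follows. It uses the standard Baker-Cousins Poisson likelihood-ratio statistic; the bin counts and lifetime templates are purely hypothetical stand-ins for the paper's Geant4 output.

```python
import numpy as np

def likelihood_ratio_chi2(counts, expected):
    """Baker-Cousins likelihood-ratio chi-square for Poisson-distributed
    histogram counts; in empty bins the n*ln(n/nu) term vanishes."""
    counts = np.asarray(counts, dtype=float)
    expected = np.asarray(expected, dtype=float)
    term = expected - counts
    nonzero = counts > 0
    term[nonzero] += counts[nonzero] * np.log(counts[nonzero] / expected[nonzero])
    return 2.0 * term.sum()

# Hypothetical usage: pick the simulated lifetime whose template minimizes chi2.
# templates[tau] would be a simulated lineshape for lifetime tau (in ps).
data = np.array([3, 8, 15, 22, 14, 6, 2])
templates = {7.5: np.array([2.5, 7.9, 16.1, 21.0, 13.8, 6.3, 2.4]),
             7.8: np.array([2.9, 8.1, 15.2, 21.8, 14.1, 6.1, 2.1])}
best_tau = min(templates, key=lambda t: likelihood_ratio_chi2(data, templates[t]))
print(best_tau)
```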
Using Ground-Based Measurements and Retrievals to Validate Satellite Data
NASA Technical Reports Server (NTRS)
Dong, Xiquan
2002-01-01
The proposed research is to use the DOE ARM ground-based measurements and retrievals as ground-truth references for validating satellite cloud results and retrieval algorithms. This validation effort covers four different aspects: (1) cloud properties from different satellites, and therefore different sensors, TRMM VIRS and TERRA MODIS; (2) cloud properties at different climatic regions, such as the DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, i.e., low- and high-level cloud properties; and (4) day and night retrieval algorithms. Validation of satellite-retrieved cloud properties is very difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, if carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Although the validation effort is difficult, significant progress was made during the proposed study period, and the major accomplishments are summarized in the following.
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
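To make the mechanism concrete, here is a small simulation sketch (not from the paper; all coefficients and error variances are invented) showing how correlated measurement errors in a strong risk factor and an inconsequential correlated factor distort the estimated effect of the inconsequential one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(size=n)                  # strong risk factor (confounder)
z_true = 0.5 * x_true + rng.normal(size=n)   # correlated, inconsequential factor
y = 1.0 * x_true + rng.normal(size=n)        # outcome depends on X only

# Correlated measurement errors (e.g. shared reporting bias in diet recall)
cov = [[0.5, 0.3], [0.3, 0.5]]
ex, ez = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
x_obs, z_obs = x_true + ex, z_true + ez

# OLS of y on the error-prone X and Z
X = np.column_stack([np.ones(n), x_obs, z_obs])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # the coefficient on z_obs drifts away from its true value of 0
```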
Measuring upper limb function in children with hemiparesis with 3D inertial sensors.
Newman, Christopher J; Bruchez, Roselyn; Roches, Sylvie; Jequier Gygax, Marine; Duc, Cyntia; Dadashi, Farzin; Massé, Fabien; Aminian, Kamiar
2017-12-01
Upper limb assessments in children with hemiparesis rely on clinical measurements, which despite standardization are prone to error. Recently, 3D movement analysis using optoelectronic setups has been used to measure upper limb movement, but generalization is hindered by time and cost. Body worn inertial sensors may provide a simple, cost-effective alternative. We instrumented a subset of 30 participants in a mirror therapy clinical trial at baseline, post-treatment, and follow-up clinical assessments, with wireless inertial sensors positioned on the arms and trunk to monitor motion during reaching tasks. Inertial sensor measurements distinguished paretic and non-paretic limbs with significant differences (P < 0.01) in movement duration, power, range of angular velocity, elevation, and smoothness (normalized jerk index and spectral arc length). Inertial sensor measurements correlated with functional clinical tests (Melbourne Assessment 2); movement duration and complexity (Higuchi fractal dimension) showed moderate to strong negative correlations with clinical measures of amplitude, accuracy, and fluency. Inertial sensor measurements reliably identify paresis and correlate with clinical measurements; they can therefore provide a complementary dimension of assessment in clinical practice and during clinical trials aimed at improving upper limb function.
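The smoothness metrics named above have widely used definitions; the sketch below shows common formulations of the spectral arc length and a dimensionless squared-jerk index, assuming uniformly sampled signals. The cutoff frequency, padding, and normalization choices here are generic assumptions and may differ from the paper's exact settings.

```python
import numpy as np

def spectral_arc_length(speed, fs, fc=10.0, pad=4):
    """Smoothness from the arc length of the normalized Fourier magnitude
    spectrum of a speed profile (values closer to 0 indicate smoother motion)."""
    n = int(2 ** np.ceil(np.log2(len(speed))) * pad)
    spectrum = np.abs(np.fft.rfft(speed, n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    sel = freqs <= fc
    mag = spectrum[sel] / spectrum[sel].max()   # normalize to peak magnitude
    f_norm = freqs[sel] / fc                    # normalize frequency axis
    return -np.sum(np.sqrt(np.diff(f_norm) ** 2 + np.diff(mag) ** 2))

def normalized_jerk(position, fs):
    """Dimensionless jerk index: integrated squared jerk scaled by
    movement duration^5 over squared movement amplitude."""
    dt = 1.0 / fs
    jerk = np.diff(position, n=3) / dt ** 3
    duration = len(position) * dt
    amplitude = position.max() - position.min()
    return np.sum(jerk ** 2) * dt * duration ** 5 / amplitude ** 2
```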
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
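As a rough illustration of why the two error types bias a time-series Poisson regression differently, here is a hedged simulation sketch. The series length, variances, and coefficients are invented, and the multiplicative error is additive on the log scale as in the study; this is not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, beta0, beta1 = 2000, 2.0, 0.05

# Classical-type error: measurement = truth + independent noise (log scale)
x_true = rng.normal(0, 0.4, n)                 # true log exposure
w_classical = x_true + rng.normal(0, 0.3, n)
y_c = rng.poisson(np.exp(beta0 + beta1 * np.exp(x_true)))

# Berkson-type error: truth = measurement + independent noise (log scale)
w_berkson = rng.normal(0, 0.4, n)              # measured log exposure
x_b = w_berkson + rng.normal(0, 0.3, n)
y_b = rng.poisson(np.exp(beta0 + beta1 * np.exp(x_b)))

for name, w, y in [("classical", w_classical, y_c), ("berkson", w_berkson, y_b)]:
    fit = sm.GLM(y, sm.add_constant(np.exp(w)),
                 family=sm.families.Poisson()).fit()
    print(name, fit.params[1])  # classical error attenuates the estimate
```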
NASA Astrophysics Data System (ADS)
Wielgosz, Agata; Tercjak, Monika; Brzeziński, Aleksander
2016-06-01
Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of realising the Celestial Reference Frame and tying it to the Terrestrial Reference Frame. It is also the only technique that measures all the Earth Orientation Parameters (EOP) on a regular basis; thus the role of VLBI in the determination of universal time, nutation, polar motion and station coordinates is invaluable. Although geodetic VLBI has been providing observations for more than 30 years, there are no clear guidelines on how to deal with stations or baselines having significantly larger post-fit residuals than the others. In our work we compare the common weighting strategy, using squared formal errors, with strategies involving exclusion or down-weighting of stations or baselines. For that purpose we apply the Vienna VLBI Software VieVS with the necessary additional procedures. In our analysis we focus on statistical indicators that might serve as the criterion for excluding or down-weighting inferior stations or baselines, as well as on the influence of the adopted strategy on the estimation of EOP and station coordinates. Our analysis shows that in about 99% of 24-hour VLBI sessions there is no need to exclude any data, as the down-weighting procedure is sufficiently efficient. Although the results presented here do not clearly indicate the best algorithm, they show the strengths and weaknesses of the applied methods and point out some limitations of automatic analysis of VLBI data. Moreover, it is also shown that the influence of the adopted weighting strategy is not always clearly reflected in the results of the analysis.
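One simple way to realise such down-weighting is to inflate the variance of observations with outlying post-fit residuals rather than excluding them. The sketch below is an illustration under invented conventions (robust MAD scatter, quadratic weight deflation), not the VieVS implementation.

```python
import numpy as np

def downweight(formal_errors, residuals, factor=3.0):
    """Inflate the variance of observations whose post-fit residuals exceed
    `factor` times the robust scatter, instead of excluding them outright."""
    residuals = np.asarray(residuals, float)
    scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
    bad = np.abs(residuals) > factor * scale
    weights = 1.0 / np.asarray(formal_errors, float) ** 2
    weights[bad] *= (factor * scale / np.abs(residuals[bad])) ** 2
    return weights
```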
Satellite-based Calibration of Heat Flux at the Ocean Surface
NASA Astrophysics Data System (ADS)
Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.
2016-02-01
Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied not only to inform the initial ocean state but also to mitigate errors in surface heat flux and in model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak-constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
NASA Astrophysics Data System (ADS)
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out. The dynamic error is quantified, and its volatility and periodicity are identified; the dynamic error characteristics are shown in detail. These results lay the foundation for further accuracy improvement.
Jaacks, Lindsay M.; Crandell, Jamie; Liese, Angela D.; Lamichhane, Archana P.; Bell, Ronny A.; Dabelea, Dana; D'Agostino, Ralph B.; Dolan, Lawrence M.; Marcovina, Santica; Reynolds, Kristi; Shah, Amy S.; Urbina, Elaine M.; Wadwa, R. Paul; Mayer-Davis, Elizabeth J.
2014-01-01
Aim To examine the association of dietary fiber intake with inflammation and arterial stiffness among youth with type 1 diabetes (T1D) in the US. Methods Data are from youth ≥ 10 years old with clinically diagnosed T1D for ≥ 3 months and ≥ 1 positive diabetes autoantibody in the SEARCH for Diabetes in Youth Study. Fiber intake was assessed by food frequency questionnaire with measurement error (ME) accounted for by structural sub-models derived using additional 24-hour dietary recall data in a calibration sample and the respective exposure-disease model covariates. Markers of inflammation, measured at baseline, included IL-6 (n=1405), CRP (n=1387), and fibrinogen (n=1340); markers of arterial stiffness, measured approximately 19 months post-baseline, were available in a subset of participants and included augmentation index (n=180), pulse wave velocity (n=184), and brachial distensibility (n=177). Results Mean (SD) T1D duration was 47.9 (43.2) months; 12.5% of participants were obese. Mean (SD) ME-adjusted fiber intake was 15 (2.8) g/day. In multivariable analyses, fiber intake was not associated with inflammation or arterial stiffness. Conclusion Among youth with T1D, fiber intake does not meet recommendations and is not associated with measures of systemic inflammation or vascular stiffness. Further research is needed to evaluate whether fiber is associated with these outcomes in older individuals with T1D or among individuals with higher intakes than those observed in the present study. PMID:24613131
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Turbulent Extreme Event Simulations for Lidar-Assisted Wind Turbine Control
NASA Astrophysics Data System (ADS)
Schlipf, David; Raach, Steffen
2016-09-01
This work presents a wind field generator that makes it possible to shape wind fields in the time domain while maintaining their spectral properties. This is done by iterative generation of wind fields, minimizing the error between the wind characteristics of the generated fields and desired values. The method leads towards realistic ultimate load calculations for lidar-assisted control. This is demonstrated by fitting a turbulent wind field to an Extreme Operating Gust. The wind field is then used to compare a baseline feedback controller alone against a combined feedback and feedforward controller using simulated lidar measurements. The comparison confirms that the lidar-assisted controller is still able to significantly reduce the ultimate loads on the tower base under these more realistic conditions.
Confabulation: What is associated with its rise and fall? A study in brain injury.
Bajo, Ana; Fleminger, Simon; Metcalfe, Chris; Kopelman, Michael D
2017-02-01
The aim of this study was to investigate cognitive and emotional factors associated with the presence and clinical course of confabulation. 24 confabulating participants were compared with 11 brain injured and 6 healthy controls on measures of temporal context confusions (TCC), mood state (elation, depression) and lack of insight. Measures of autobiographical memory and executive function were also available. Changes in confabulation and these other measures were monitored over 9 months in the confabulating group. We found that TCC were more common in confabulating patients than in healthy controls, and that the decline in these errors paralleled the recovery from confabulation. However, TCC were not specific to the presence of confabulation in brain injury; and their decline was not correlated with change in confabulation scores over 9 months. We found that elated mood and lack of insight discriminated between confabulating and non-confabulating patients, but these measures did not correlate with either the severity of confabulation or change in confabulation scores through time. What seems to have been most strongly associated with the severity of confabulation scores at 'baseline' and changes through time (over 9 months) were the severity of memory impairment (especially on autobiographical memory) and errors on executive tests (particularly in making cognitive estimates). Greater autobiographical memory and executive impairment were associated with more severe confabulation. The findings were consistent with the view that confabulation results from executive dysfunction where autobiographical memory is also impaired; and that it resolves as these impairments subside. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mitchell, C.; Hu, C.; Bowler, B.; Drapeau, D.; Balch, W. M.
2017-11-01
A new algorithm for estimating particulate inorganic carbon (PIC) concentrations from ocean color measurements is presented. PIC plays an important role in the global carbon cycle through the oceanic carbonate pump; therefore accurate estimates of PIC concentrations from satellite remote sensing are crucial for observing changes on a global scale. An extensive global data set was created from field and satellite observations to investigate the relationship between PIC concentrations and differences in remote sensing reflectance (Rrs) at green, red, and near-infrared (NIR) wavebands. Three color indices were defined: two as the relative height of Rrs(667) above a baseline running between Rrs(547) and an Rrs in the NIR (either 748 or 869 nm), and one as the difference between Rrs(547) and Rrs(667). All three color indices were found to explain over 90% of the variance in field-measured PIC. However, due to the lack of availability of Rrs(NIR) in the standard ocean color data products, most of the further analysis presented here was done using the color index determined from only two bands. The new two-band color index algorithm was found to retrieve PIC concentrations more accurately than the current standard algorithm used in generating global PIC data products. Application of the new algorithm to satellite imagery showed patterns on the global scale as revealed from field measurements. The new algorithm was more resistant to atmospheric correction errors and residual errors in sun glint corrections, as seen by a reduction in the speckling and patchiness in the satellite-derived PIC images.
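The index construction described above can be written down directly. The sketch below assumes the band centers named in the abstract and omits the empirical index-to-PIC conversion, which is specific to the paper.

```python
import numpy as np

def pic_color_index(rrs547, rrs667, rrs748=None):
    """Two- or three-band color index of the kind described above.
    With rrs748 given: height of Rrs(667) above a linear 547-748 nm baseline.
    Without it: the simple band difference Rrs(547) - Rrs(667)."""
    rrs547, rrs667 = np.asarray(rrs547, float), np.asarray(rrs667, float)
    if rrs748 is None:
        return rrs547 - rrs667
    # Baseline value at 667 nm, interpolated between the 547 and 748 nm bands
    baseline = rrs547 + (667.0 - 547.0) / (748.0 - 547.0) * (np.asarray(rrs748, float) - rrs547)
    return rrs667 - baseline
```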
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
Referenceless MR thermometry-a comparison of five methods.
Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho
2017-01-07
Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, 2/98 percentiles of error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than other methods, especially in the lower signal-to-noise ratio (SNR) and fast changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices as they do not need phase unwrapping. The results here would facilitate the choice of appropriate referenceless methods in various MR thermometry applications.
NASA Astrophysics Data System (ADS)
Zhao, Chaoying; Qu, Feifei; Zhang, Qin; Zhu, Wu
2012-10-01
The accuracy of a DEM generated with the interferometric synthetic aperture radar (InSAR) technique depends mostly on phase unwrapping errors, atmospheric effects, baseline errors and phase noise. The first term is more serious when high-resolution TerraSAR-X data over urban and mountainous regions are used. In addition, the deformation effect cannot be neglected if the study region undergoes surface deformation between the SAR acquisition dates. In this paper, several measures have been taken to generate a high resolution DEM over urban and mountainous regions with TerraSAR-X data. The SAR interferometric pairs are divided into two subsets: (a) DEM subsets and (b) deformation subsets. These two interferometric sets serve to generate the DEM and the deformation, respectively. An external DEM is applied to assist the phase unwrapping with a "remove-restore" procedure. The deformation phase is re-scaled and subtracted from each DEM observation. Lastly, the stochastic errors, including atmospheric effects and phase noise, are suppressed by weighted averaging of the heights from several interferograms. Six TerraSAR-X scenes are used to generate a 6-m-resolution DEM over Xi'an, China, using these procedures. Both discrete GPS heights and a local high-resolution, high-precision DEM are applied to calibrate the DEM generated with our algorithm, and a precision of around 4.1 m is achieved.
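The final fusion step is naturally expressed as a per-pixel inverse-variance weighted mean. A minimal sketch, assuming each interferogram supplies a height map and a variance map of the same shape (the paper's actual weighting scheme may differ):

```python
import numpy as np

def fuse_heights(height_stack, variance_stack):
    """Per-pixel inverse-variance weighted mean over a stack of DEM estimates
    from several interferograms; suppresses atmospheric and noise effects."""
    heights = np.asarray(height_stack, float)
    weights = 1.0 / np.asarray(variance_stack, float)
    return np.sum(weights * heights, axis=0) / np.sum(weights, axis=0)
```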
Dooley, Erin E; Golaszewski, Natalie M
2017-01-01
Background Physical activity tracking wearable devices have emerged as an increasingly popular method for consumers to assess their daily activity and calories expended. However, whether these wearable devices are valid at different levels of exercise intensity is unknown. Objective The objective of this study was to examine heart rate (HR) and energy expenditure (EE) validity of 3 popular wrist-worn activity monitors at different exercise intensities. Methods A total of 62 participants (females: 58%, 36/62; nonwhite: 47% [13/62 Hispanic, 8/62 Asian, 7/62 black/African American, 1/62 other]) wore the Apple Watch, Fitbit Charge HR, and Garmin Forerunner 225. Validity was assessed using 2 criterion devices: an HR chest strap and a metabolic cart. Participants completed a 10-minute seated baseline assessment; separate 4-minute stages of light-, moderate-, and vigorous-intensity treadmill exercise; and a 10-minute seated recovery period. Data from the devices were compared with each criterion via two-way repeated-measures analysis of variance and Bland-Altman analysis. Differences are expressed as mean absolute percentage error (MAPE). Results For the Apple Watch, HR MAPE was between 1.14% and 6.70%. HR was not significantly different at the start (P=.78), during baseline (P=.76), or at vigorous intensity (P=.84); lower HR readings were measured during light intensity (P=.03), moderate intensity (P=.001), and recovery (P=.004). EE MAPE was between 14.07% and 210.84%. The device measured higher EE at all stages (P<.01). For the Fitbit device, HR MAPE was between 2.38% and 16.99%. HR was not significantly different at the start (P=.67) or during moderate intensity (P=.34); lower HR readings were measured during baseline, vigorous intensity, and recovery (P<.001) and higher HR during light intensity (P<.001). EE MAPE was between 16.85% and 84.98%. The device measured higher EE at baseline (P=.003), light intensity (P<.001), and moderate intensity (P=.001). EE was not significantly different at vigorous intensity (P=.70) or recovery (P=.10). For the Garmin Forerunner 225, HR MAPE was between 7.87% and 24.38%. HR was not significantly different at vigorous intensity (P=.35). The device measured higher HR readings at the start, baseline, light intensity, moderate intensity (P<.001), and recovery (P=.04). EE MAPE was between 30.77% and 155.05%. The device measured higher EE at all stages (P<.001). Conclusions This study provides one of the first validation assessments for the Fitbit Charge HR, Apple Watch, and Garmin Forerunner 225. An advantage and novel approach of the study is the examination of HR and EE at specific physical activity intensities. Establishing the validity of wearable devices is of particular interest as these devices are being used in weight loss interventions and could impact findings. Future research should investigate why differences between exercise intensities and among devices exist. PMID:28302596
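For readers reproducing this kind of validation, the two headline statistics are simple to compute. A sketch, assuming paired device and criterion readings; the Bland-Altman limits use the conventional 1.96 SD form:

```python
import numpy as np

def mape(device, criterion):
    """Mean absolute percentage error of device readings vs the criterion."""
    device, criterion = np.asarray(device, float), np.asarray(criterion, float)
    return 100.0 * np.mean(np.abs(device - criterion) / criterion)

def bland_altman_limits(device, criterion):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(device, float) - np.asarray(criterion, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```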
Nowroozzadeh, Mohammad Hosein; Mirhosseini, Amirhossein; Meshkibaf, Mohammad Hassan; Roshannejad, Javad
2012-03-01
Islamic Ramadan is the month of fasting, in which intake of food and drink is restricted from sunrise until sunset. The objective of the present study was to find out the effect of altered eating habits during Ramadan fasting on ocular refractive and biometric properties. In this prospective case series, 40 eyes of 22 healthy volunteers with a mean age of 60.55 ± 12.20 years were enrolled. Patients with any systemic disorder and eyes with pathology or previous surgery were excluded. One month before Ramadan (at 8.00 am), during Ramadan fasting (at 8.00 am and 4.00 pm) and one month later during the non-fasting period (at 8.00 am), ocular refractive and biometric characteristics were measured using an autokeratorefractometer (Auto-Kerato-Refractometer KR-8900; Topcon Co, Tokyo, Japan) and contact ultrasonic biometry (Nidek Echoscan US 800; Nidek Co, Tokyo, Japan). Anterior chamber depth was significantly increased during fasting compared with baseline measurements and returned to baseline one month after Ramadan (3.22 ± 0.07 mm and 4.33 ± 0.17 mm for non-fasting and fasting, respectively; p < 0.001). The anterior chamber depth measurements were significantly larger at 8.00 am during fasting compared with 4.00 pm (p = 0.01). Axial length was significantly decreased during fasting and returned to baseline one month after Ramadan (23.09 ± 0.14 mm and 22.65 ± 0.18 mm, for non-fasting and fasting, respectively; p < 0.001). Intraocular lens power calculations were significantly increased during fasting and returned to baseline one month after Ramadan (SRK-T formula: 21.46 ± 0.27 D and 22.92 ± 0.46 D, for non-fasting and fasting, respectively; p < 0.001). There were no significant differences in spherical equivalent, corneal astigmatism, mean keratometry and flatter and steeper corneal radii of curvature between time intervals. Ramadan fasting is associated with statistically significant alterations in anterior chamber depth and axial length that result in both statistically and clinically significant changes in intraocular lens power calculations. Therefore, relying on measurements taken during this month might lead to refractive errors after cataract surgery. © 2012 The Authors. Clinical and Experimental Optometry © 2012 Optometrists Association Australia.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
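To make the approach concrete, here is a compact sketch of plain PSO tuning SVR hyperparameters with scikit-learn. It is a simplified stand-in (no natural selection or annealing step, invented search bounds and swarm constants), not the paper's NAPSO.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)

def pso_svr(X, y, n_particles=12, n_iter=20):
    """Plain PSO over log10(C) and log10(gamma) for an SVR error-prediction
    model, minimizing cross-validated RMSE."""
    lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        model = SVR(C=10.0 ** p[0], gamma=10.0 ** p[1])
        return -cross_val_score(model, X, y, cv=3,
                                scoring="neg_root_mean_squared_error").mean()

    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return SVR(C=10.0 ** gbest[0], gamma=10.0 ** gbest[1]).fit(X, y)
```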
Differential transfer processes in incremental visuomotor adaptation.
Seidler, Rachel D
2005-01-01
Visuomotor adaptive processes were examined by testing transfer of adaptation between similar conditions. Participants made manual aiming movements with a joystick to hit targets on a computer screen, with real-time feedback display of their movement. They adapted to three different rotations of the display in a sequential fashion, with a return to baseline display conditions between rotations. Adaptation was better when participants had prior adaptive experiences. When performance was assessed using direction error (calculated at the time of peak velocity) and initial endpoint error (error before any overt corrective actions), transfer was greater when the final rotation reflected an addition of previously experienced rotations (adaptation order 30 degrees rotation, 15 degrees, 45 degrees) than when it was a subtraction of previously experienced conditions (adaptation order 45 degrees rotation, 15 degrees, 30 degrees). Transfer was equal regardless of adaptation order when performance was assessed with final endpoint error (error following any discrete, corrective actions). These results imply the existence of multiple independent processes in visuomotor adaptation.
Impact of shorter wavelengths on optical quality for LAWS
NASA Technical Reports Server (NTRS)
Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.
1993-01-01
This study explores parametrically, as a function of wavelength, the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of the merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing the wavelength and changing the optical design resulted in optical quality tolerances within the SOA at both 2 and 1 micrometers. However, advanced sensing and control devices would be necessary to maintain on-orbit alignment. Optical tolerances for maintaining boresight stability would have to be tightened by a factor of 1.8 for a 2 micrometer system and by 3.6 for a 1 micrometer system relative to the baseline CO2 LAWS. Available SOA components could be used for operation at 2 micrometers, but operation at 1 micrometer does not appear feasible.
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on the incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation, and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). The baseline monthly error rate was stable at 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed upon initiation of the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
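The segmented-regression design described above has a standard regression form: baseline level and trend, an immediate level change at the intervention, and a slope change afterwards. A sketch with invented noise, shaped to mimic the reported 30-month baseline, ~30% immediate drop, and post-implementation slope change:

```python
import numpy as np
import statsmodels.api as sm

# Monthly prevented-error rates per 1000 doses; intervention at month 30
# (30 pre-implementation months, 28 post-implementation months).
months = np.arange(58)
post = (months >= 30).astype(float)
time_since = np.where(post == 1, months - 30, 0)

rate = (16.7 + 0.0 * months - 5.0 * post - 0.34 * time_since
        + np.random.default_rng(3).normal(0, 1.0, 58))

# Segmented regression: intercept, pre-trend, level change, slope change
X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.OLS(rate, X).fit()
print(fit.params)  # const, pre-trend, immediate drop, post-trend change
```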
Visual error augmentation enhances learning in three dimensions.
Sharp, Ian; Huang, Felix; Patton, James
2011-09-02
Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmentation-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removal of the flip, all subjects returned to baseline within 6 trials.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
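The 2.77 multiplier comes from 1.96·√2: the difference between two measurements on the same subject has variance 2·Sw², so its 95% limit is 1.96·√2·Sw ≈ 2.77·Sw. A sketch, assuming duplicate measurements per subject:

```python
import numpy as np

def repeatability(m1, m2):
    """Within-subject SD from duplicate measurements and the repeatability
    coefficient 2.77*Sw: the difference between two measurements on the same
    person is expected to exceed this only 5% of the time."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    sw = np.sqrt(np.mean(d ** 2) / 2.0)  # within-subject standard deviation
    return sw, 2.77 * sw
```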
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor. It has important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is calibrated by its error, so error measurement is an important problem in the production and application of inductosyns. At present, the error of an inductosyn is mainly obtained by manual measurement, with disadvantages that cannot be ignored: high labour intensity for the operator, errors that occur easily, poor repeatability, and so on. In order to solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal can be obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. Measuring and calculating errors caused by human factors are eliminated by this method, and it makes the measuring process quicker, more exact and more reliable. Experiments prove that the accuracy of the error measuring system is 1.1 arcseconds (peak-to-peak).
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
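The headline effect, that ignoring response measurement error erodes the power of a standard t-test analysis, is easy to see in a small simulation. The sketch below uses an invented two-level experiment with multiplicative error; it is in the spirit of, but does not reproduce, the paper's study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

def power(effect=1.0, noise_mult=0.0, n=8, reps=2000):
    """Fraction of simulated two-level experiments in which a t-test detects
    a true effect, with multiplicative error added to the true response."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(10.0, 1.0, n)
        b = rng.normal(10.0 + effect, 1.0, n)
        a *= 1.0 + rng.normal(0, noise_mult, n)  # response measurement error
        b *= 1.0 + rng.normal(0, noise_mult, n)
        hits += ttest_ind(a, b).pvalue < 0.05
    return hits / reps

print(power(noise_mult=0.0), power(noise_mult=0.1))  # power drops with error
```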
Habitable Exoplanet Imager Optical-Mechanical Design and Analysis
NASA Technical Reports Server (NTRS)
Gaskins, Jonathan; Stahl, H. Philip
2017-01-01
The Habitable Exoplanet Imager (HabEx) is a space telescope currently in development whose mission includes finding and spectroscopically characterizing exoplanets. Effective high-contrast imaging requires tight stability requirements of the mirrors to prevent issues such as line of sight and wavefront errors. PATRAN and NASTRAN were used to model updates in the design of the HabEx telescope and find how those updates affected stability. Most of the structural modifications increased first mode frequencies and improved line of sight errors. These studies will be used to help define the baseline HabEx telescope design.
Solar dynamic heat receiver thermal characteristics in low earth orbit
NASA Technical Reports Server (NTRS)
Wu, Y. C.; Roschke, E. J.; Birur, G. C.
1988-01-01
A simplified system model is under development for evaluating the thermal characteristics and thermal performance of a solar dynamic spacecraft energy system's heat receiver. Results based on the baseline orbit, power system configuration, and operational conditions are generated for three basic receiver concepts and three concentrator surface slope errors. Receiver thermal characteristics and thermal behavior in LEO conditions are presented. The configuration in which heat is directly transferred to the working fluid is noted to generate the best system and thermal characteristics, as well as the lowest performance degradation with increasing slope error.
Response cost, reinforcement, and children's Porteus Maze qualitative performance.
Neenan, D M; Routh, D K
1986-09-01
Sixty fourth-grade children were given two different series of the Porteus Maze Test. The first series was given as a baseline, and the second series was administered under one of four different experimental conditions: control, response cost, positive reinforcement, or negative verbal feedback. Response cost and positive reinforcement, but not negative verbal feedback, led to significant decreases in the number of all types of qualitative errors in relation to the control group. The reduction of nontargeted as well as targeted errors provides evidence for the generalized effects of response cost and positive reinforcement.
Day-to-day variability in spot urine protein-creatinine ratio measurements.
Naresh, Chetana N; Hayen, Andrew; Craig, Jonathan C; Chadban, Steven J
2012-10-01
Accurate measurement of proteinuria is important in the diagnosis and management of chronic kidney disease (CKD). The reference standard test, 24-hour urinary protein excretion, is inconvenient and vulnerable to collection errors. Spot urine protein-creatinine ratio (PCR) is a convenient alternative and is in widespread use. However, day-to-day variability in PCR measurements has not been evaluated. Prospective cohort study of day-to-day variability in spot urine PCR measurement. Clinically stable outpatients with CKD (n = 145) attending a university hospital CKD clinic in Australia between July 2007 and April 2010. Spot urine PCR. Spot PCR variability was assessed and repeatability limits were determined using fractional polynomials. Spot PCRs were measured from urine samples collected at 9:00 am on consecutive days and 24-hour urinary protein excretion was collected concurrently. Paired results were analyzed from 145 patients: median age, 56 years; 59% men; and median 24-hour urinary protein excretion, 0.7 (range, 0.06-35.7) g/d. Day-to-day variability was substantial and increased in absolute terms, but decreased in relative terms with increasing baseline PCR. For patients with a low baseline PCR (20 mg/mmol [177 mg/g]), a change greater than ±160% (repeatability limits, 0-52 mg/mmol [0-460 mg/g]) is required to indicate a real change in proteinuria status with 95% certainty, whereas for those with a high baseline PCR (200 mg/mmol [1,768 mg/g]), a change of ±50% (decrease to <100 mg/mmol [<884 mg/g] or increase to >300 mg/mmol [>2,652 mg/g]) represents significant change. These study results need to be replicated in other ethnic groups. Changes in PCR observed in patients with CKD, ranging from complete resolution to doubling of PCR values, could be due to inherent biological variation and may not indicate a change in disease status. This should be borne in mind when using PCR in the diagnosis and management of CKD. Copyright © 2012 National Kidney Foundation, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin
2018-07-01
This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in matrix form using homogeneous coordinate transformation theory. The influences of installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed, by more than 50%, after compensation. Repeatability experiments for the five degrees of freedom motion errors and comparison experiments for two degrees of freedom motion errors of an indexing table were performed with our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error εz and the tilt motion error around the Y axis εy are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δy and δz, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis εx is 3.8″.
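A homogeneous-coordinate error model of this kind is usually written as a 4×4 matrix under a small-angle approximation. The sketch below is a generic version of that construction; the axis conventions and the compensation step are assumptions, not the paper's exact model.

```python
import numpy as np

def error_motion_matrix(eps_x, eps_y, eps_z, delta_y, delta_z):
    """Homogeneous 4x4 error motion of a rotary axis under the small-angle
    approximation: tilts eps_x, eps_y and angular positioning error eps_z
    (radians), radial delta_y and axial delta_z (length units)."""
    return np.array([
        [1.0,    -eps_z,  eps_y, 0.0],
        [eps_z,   1.0,   -eps_x, delta_y],
        [-eps_y,  eps_x,  1.0,   delta_z],
        [0.0,     0.0,    0.0,   1.0],
    ])

def compensate(point, errors):
    """Map a nominal 3D point through the inverse error motion, i.e. the
    basic compensation step once the five motion errors are measured."""
    T = error_motion_matrix(*errors)
    p = np.append(np.asarray(point, float), 1.0)
    return np.linalg.solve(T, p)[:3]
```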
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
NASA Astrophysics Data System (ADS)
Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas
2017-10-01
In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference between the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10¹⁵ molecules cm⁻² for an integration time of 1 min. The fit error for the mini-MAX-DOAS is around 0.7 × 10¹⁵ molecules cm⁻². Although the HONO delta SCDs are normally smaller than 6 × 10¹⁵ molecules cm⁻², consistent time series of HONO delta SCDs are retrieved from the measurements of the different instruments. Fits with a sequential Fraunhofer reference spectrum (FRS) and with a daily noon FRS lead to similar consistency. Apart from the mini-MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10¹⁵ molecules cm⁻². The correlation coefficients are higher than 0.7, and the slopes of linear regressions deviate from unity by less than 16% for an elevation angle of 1°. The correlations decrease with increasing elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of the HONO results from their respective fit programs. In general these errors are smaller than 0.3 × 10¹⁵ molecules cm⁻², which is about half of the systematic difference between the real measurements. The differences in HONO delta SCDs retrieved in the three selected spectral ranges, 335-361, 335-373 and 335-390 nm, are considerable (up to 0.57 × 10¹⁵ molecules cm⁻²) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting for the three spectral ranges. The results show that water vapour absorption, the temperature and wavelength dependence of the O4 absorption, the temperature dependence of the Ring spectrum, and the polynomial and intensity offset corrections together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In this fit range the overall systematic uncertainty is about 0.87 × 10¹⁵ molecules cm⁻², much smaller than in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10¹⁵ molecules cm⁻², which is only 25% of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, half of the daytime measurements (usually in the morning) of HONO delta SCD can be above the detection limit of 0.2 × 10¹⁵ molecules cm⁻², with an uncertainty of ~0.9 × 10¹⁵ molecules cm⁻².
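At its core, a DOAS retrieval of dSCDs is a linear least-squares fit of trace-gas cross sections plus a low-order closure polynomial to the differential optical depth. The sketch below is a bare-bones illustration of that structure only; it omits the Ring, intensity offset, and wavelength-calibration terms that the campaign's baseline settings include.

```python
import numpy as np

def dscd_fit(wavelength, spectrum, reference, cross_sections, poly_order=3):
    """Minimal DOAS-style fit (a sketch): regress the log intensity ratio
    onto trace-gas cross sections plus a closure polynomial.
    Returns the fitted slant column densities, one per cross section."""
    tau = np.log(np.asarray(reference, float) / np.asarray(spectrum, float))
    basis = [np.asarray(cs, float) for cs in cross_sections]
    basis += [np.asarray(wavelength, float) ** i for i in range(poly_order + 1)]
    A = np.column_stack(basis)
    coeffs, *_ = np.linalg.lstsq(A, tau, rcond=None)
    return coeffs[:len(cross_sections)]
```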
NASA Astrophysics Data System (ADS)
Wen, D. S.; Wen, H.; Shi, Y. G.; Su, B.; Li, Z. C.; Fan, G. Z.
2018-01-01
A B-spline interpolation fitting baseline for electrochemical analysis by differential pulse voltammetry was established to determine low concentrations (less than 5.0 mg/L) of 2,6-di-tert-butyl-p-cresol (BHT) in jet fuel in the presence of 6-tert-butyl-2,4-xylenol. The experimental results show that the relative errors are less than 2.22%, the sum of standard deviations is less than 0.134 mg/L, and the correlation coefficient is more than 0.9851. If the 2,6-di-tert-butyl-p-cresol concentration is higher than 5.0 mg/L, a linear fitting baseline method would be more applicable and simpler.
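As a rough sketch of the idea, a B-spline baseline can be fitted through operator-chosen peak-free regions of the voltammogram and subtracted to recover the peak current. The example below uses SciPy on a synthetic voltammogram; the potentials, anchor windows, and peak shape are invented for illustration and do not reproduce the paper's chemistry or parameters:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Fit a cubic B-spline only through points judged to be peak-free, then
# subtract it as the baseline under the peak. Synthetic data throughout.
E = np.linspace(0.0, 1.0, 200)                          # potential, V
background = 0.5 + 0.8 * E + 0.3 * E ** 2               # slowly varying baseline
peak = 1.2 * np.exp(-0.5 * ((E - 0.55) / 0.04) ** 2)    # BHT-like peak
i_meas = background + peak

anchors = (E < 0.35) | (E > 0.75)                       # peak-free regions
tck = splrep(E[anchors], i_meas[anchors], s=0.01)       # cubic B-spline fit
i_base = splev(E, tck)                                  # baseline under the peak
print("estimated peak current:", (i_meas - i_base).max())  # ~1.2
```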
Benefits of pulmonary rehabilitation in idiopathic pulmonary fibrosis.
Swigris, Jeffrey J; Fairclough, Diane L; Morrison, Marianne; Make, Barry; Kozora, Elizabeth; Brown, Kevin K; Wamboldt, Frederick S
2011-06-01
Information on the benefits of pulmonary rehabilitation (PR) in patients with idiopathic pulmonary fibrosis (IPF) is growing, but data on PR's effects on certain important outcomes are lacking. We conducted a pilot study of PR in IPF and analyzed changes in functional capacity, fatigue, anxiety, depression, sleep, and health status from baseline to after completion of a standard, 6-week PR program. Six-min walk distance improved by a mean ± standard error of 202 ± 135 feet (P = .01) from baseline. Fatigue Severity Scale score also improved significantly, declining an average of 1.5 ± 0.5 points from baseline. There were trends toward improvement in anxiety, depression, and health status. PR improves functional capacity and fatigue in patients with IPF. (ClinicalTrials.gov registration NCT00692796.)
Chen, Hui; Lowe, Alan A; de Almeida, Fernanda Riberiro; Wong, Mary; Fleetham, John A; Wang, Bangkang
2008-09-01
The aim of this study was to test a 3-dimensional (3D) computer-assisted dental model analysis system that uses selected landmarks to describe tooth movement during treatment with an oral appliance. Dental casts of 70 patients diagnosed with obstructive sleep apnea and treated with oral appliances for a mean time of 7 years 4 months were evaluated with a 3D digitizer (MicroScribe-3DX, Immersion, San Jose, Calif) compatible with the Rhinoceros modeling program (version 3.0 SR3c, Robert McNeel & Associates, Seattle, Wash). A total of 86 landmarks on each model were digitized, and 156 variables were calculated as either the linear distance between points or the distance from points to reference planes. Four study models for each patient (maxillary baseline, mandibular baseline, maxillary follow-up, and mandibular follow-up) were superimposed on 2 sets of reference points: 3 points on the palatal rugae for maxillary model superimposition, and 3 occlusal contact points for the same set of maxillary and mandibular model superimpositions. The patients were divided into 3 evaluation groups by 5 orthodontists based on the changes between baseline and follow-up study models. Digital dental measurements could be analyzed, including arch width, arch length, curve of Spee, overbite, overjet, and the anteroposterior relationship between the maxillary and mandibular arches. A method error within 0.23 mm in 14 selected variables was found for the 3D system. The statistical differences in the 3 evaluation groups verified the division criteria determined by the orthodontists. The system provides a method to record 3D measurements of study models that permits computer visualization of tooth position and movement from various perspectives.
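Both kinds of derived variables reduce to elementary vector geometry: a point-to-point distance, and a signed distance from a landmark to a plane defined by three reference points. A minimal sketch with invented coordinates (not the study's landmarks):

```python
import numpy as np

# Point-to-point distance between two digitized landmarks (illustrative values).
p1, p2 = np.array([1.0, 2.0, 0.5]), np.array([4.0, 6.0, 0.5])
print("inter-landmark distance:", np.linalg.norm(p2 - p1))   # 5.0

# Signed distance from a landmark to a reference plane through three points,
# e.g. three occlusal contact points; coordinates are invented.
a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
n = np.cross(b - a, c - a)
n = n / np.linalg.norm(n)                  # unit normal of the reference plane
q = np.array([2.0, 3.0, 1.7])              # e.g. a cusp tip landmark
print("point-to-plane distance:", np.dot(q - a, n))          # 1.7 here
```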
NASA Astrophysics Data System (ADS)
Hovatta, Talvikki; Lister, Matthew L.; Aller, Margo F.; Aller, Hugh D.; Homan, Daniel C.; Kovalev, Yuri Y.; Pushkarev, Alexander B.; Savolainen, Tuomas
2012-10-01
We report observations of Faraday rotation measures for a sample of 191 extragalactic radio jets observed within the MOJAVE program. Multifrequency Very Long Baseline Array observations were carried out over 12 epochs in 2006 at four frequencies between 8 and 15 GHz. We detect parsec-scale Faraday rotation measures in 149 sources and find the quasars to have larger rotation measures on average than BL Lac objects. The median core rotation measures are significantly higher than in the jet components. This is especially true for quasars, where we detect a significant negative correlation between the magnitude of the rotation measure and the de-projected distance from the core. We perform detailed simulations of the observational errors of total intensity, polarization, and Faraday rotation, and concentrate on the errors of transverse Faraday rotation measure gradients in unresolved jets. Our simulations show that the finite size of the image restoring beam has a significant effect on the observed rotation measure gradients, and spurious gradients can occur due to noise in the data if the jet is less than two beams wide in polarization. We detect significant transverse rotation measure gradients in four sources (0923+392, 1226+023, 2230+114, and 2251+158). In 1226+023, the rotation measure is seen, for the first time, to change sign from positive to negative across the transverse cuts, which supports the presence of a helical magnetic field in the jet. In this source we also detect variations in the jet rotation measure over a timescale of three months, which are difficult to explain with external Faraday screens and which suggest internal Faraday rotation. By comparing fractional polarization changes in jet components between the four frequency bands to depolarization models, we find that an external, purely random Faraday screen viewed through only a few lines of sight can explain most of our polarization observations, but in some sources, such as 1226+023 and 2251+158, internal Faraday rotation is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
N, Gwilliam M; J, Collins D; O, Leach M
Purpose: To assess the feasibility of accurately quantifying the concentration of MRI contrast agent (CA) in pulsatile flowing blood by measuring its T1, as is common for the purposes of obtaining a patient-specific arterial input function (AIF). Dynamic contrast enhanced (DCE) MRI and pharmacokinetic (PK) modelling are widely used to produce measures of vascular function, but inaccurate measurement of the AIF undermines their accuracy. A proposed solution is to measure the T1 of blood in a large vessel using the Fram double flip angle method during the passage of a bolus of CA. This work expands on previous work by assessing pulsatile flow and the changes in T1 seen with a CA bolus. Methods: A phantom was developed which used a physiological pump to pass fluid of a known T1 (812 ms) through the centre of a head coil of a clinical 1.5 T MRI scanner. Measurements were made using high temporal resolution sequences suitable for DCE-MRI and were used to validate a virtual phantom that simulated the expected errors due to pulsatile flow and the bolus CA concentration changes typically found in patients. Results: Measured and virtual results showed similar trends, although there were differences that may be attributed to the virtual phantom not accurately simulating the spin history of the fluid before entering the imaging volume. The relationship between T1 measurement and flow speed was non-linear. T1 measurement is compromised by new spins flowing into the imaging volume without being subject to enough excitations to have reached steady state. The virtual phantom demonstrated a range of recorded T1 values for various simulated T1 values and flow rates. Conclusion: T1 measurement of flowing blood using standard DCE-MRI sequences is very challenging. Measurement error is non-linear in relation to instantaneous flow speed. Optimising sequence parameters and lowering the baseline T1 of blood should be considered.
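For context, the double flip angle estimate is closed-form for stationary spins: two spoiled gradient-echo signals at flip angles a1 and a2 determine E1 = exp(-TR/T1) through the linearized signal equation. The sketch below uses invented sequence parameters (only the 812 ms T1 is taken from the abstract) and illustrates why the estimate is exact only once spins have reached steady state, which is precisely the assumption that inflowing blood violates:

```python
import numpy as np

# Double flip angle (DESPOT1-style) T1 estimate for stationary spins.
# TR and flip angles are illustrative, not the study's protocol.
TR = 5.0e-3                  # repetition time, s (assumed)
T1_true = 0.812              # s, the phantom fluid T1 from the abstract
E = np.exp(-TR / T1_true)
a1, a2 = np.deg2rad(2.0), np.deg2rad(14.0)

def spgr(alpha, M0=1.0):
    # Steady-state spoiled gradient-echo signal equation.
    return M0 * np.sin(alpha) * (1 - E) / (1 - E * np.cos(alpha))

S1, S2 = spgr(a1), spgr(a2)
# Linearized form S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), solved for E1:
E1 = (S1 / np.sin(a1) - S2 / np.sin(a2)) / (S1 / np.tan(a1) - S2 / np.tan(a2))
print("recovered T1 (s):", -TR / np.log(E1))   # ~0.812 for stationary spins
```

Fresh spins entering the slice have not experienced the excitation history the signal equation assumes, so their signal departs from spgr() and the recovered T1 becomes flow-dependent, as the abstract reports.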
Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.
Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E
2017-04-01
A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for more than 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Our improvement efforts, guided by the Model for Improvement, were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.
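The U chart mentioned here plots errors per dose with control limits that widen in months with fewer doses. A minimal sketch of that arithmetic, using made-up monthly counts rather than the hospital's data:

```python
import numpy as np

# U-chart basics: rate per 1,000 doses, a pooled centerline, and 3-sigma
# Poisson-based limits that vary with the monthly dose count. Counts invented.
errors = np.array([12, 9, 14, 11, 4, 6, 5, 7])               # errors per month
doses = np.array([3.1, 2.8, 3.5, 3.0, 2.9, 3.2, 2.7, 3.3])   # thousands of doses
u = errors / doses                                           # rate per 1,000 doses
ubar = errors.sum() / doses.sum()                            # centerline
ucl = ubar + 3 * np.sqrt(ubar / doses)                       # upper limit per month
lcl = np.clip(ubar - 3 * np.sqrt(ubar / doses), 0, None)     # lower limit, floored
print(np.round(u, 2), "centerline:", round(ubar, 2))
```

A sustained run of points below the centerline is what justifies recomputing it, the "centerline shift" the abstract describes.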
Common Scientific and Statistical Errors in Obesity Research
George, Brandon J.; Beasley, T. Mark; Brown, Andrew W.; Dawson, John; Dimova, Rositsa; Divers, Jasmin; Goldsby, TaShauna U.; Heo, Moonseong; Kaiser, Kathryn A.; Keith, Scott; Kim, Mimi Y.; Li, Peng; Mehta, Tapan; Oakes, J. Michael; Skinner, Asheley; Stuart, Elizabeth; Allison, David B.
2015-01-01
We identify 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discuss how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and “p-value hacking,” 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. We hope that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician. PMID:27028280
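Two of these problems, testing against baseline values (error 2) and ignoring regression to the mean (error 8), are easy to demonstrate by simulation. In the hedged toy example below there is no intervention at all, yet subjects enrolled for having high baseline measurements still appear to improve:

```python
import numpy as np

# Toy demonstration, with invented numbers: no treatment is applied, yet a
# group enrolled for high baseline values "improves" at follow-up purely
# through measurement error and regression to the mean.
rng = np.random.default_rng(1)
true_bmi = rng.normal(30, 4, 100_000)                   # stable true values
baseline = true_bmi + rng.normal(0, 2, true_bmi.size)   # noisy baseline
followup = true_bmi + rng.normal(0, 2, true_bmi.size)   # noisy follow-up, no effect

selected = baseline > 35                                # enrol "high" subjects only
change = followup[selected] - baseline[selected]
print("mean change in enrolled group:", change.mean())  # about -1.4, not 0
```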
Changes in muscle directional tuning parallel feedforward adaptation to a visuomotor rotation.
de Rugy, Aymar; Carroll, Timothy J
2010-06-01
When people learn to reach in a novel sensorimotor environment, there are changes in the muscle activity required to achieve task goals. Here, we assessed the time course of changes in muscle directional tuning during acquisition of a new mapping between visual information and isometric force production in the absence of feedback-based error corrections. We also measured the influence of visuomotor adaptation on corticospinal excitability, to test whether any changes in muscle directional tuning are associated with adaptations in the final output components of the sensorimotor control system. Nine right-handed subjects performed a ballistic, center-out isometric target acquisition task with the right wrist (16 targets spaced every 22.5 degrees in the joint space). Surface electromyography was recorded from four major wrist muscles, and motor evoked potentials induced by transcranial magnetic stimulation were measured at baseline, after task execution in the absence of the rotation (A1), after adaptation to the rotation (B), and after a final block of trials without rotation (A2). Changes in the directional tuning of muscles closely matched the rotation of the directional error in force, indicating that the functional contribution of muscles remained consistent over the adaptation period. In contrast to previous motor learning studies, we found only minor changes in the amount of muscular activity and no increase in corticospinal excitability. These results suggest that increased muscle co-activation occurs only when the dynamics of the limb are perturbed and/or that online error corrections or altered force requirements are necessary to elicit a component of the adaptation in the final steps of the transformation between motor goal and muscle activation.
Bertens, Dirk; Kessels, Roy P C; Fiorenzato, Eleonora; Boelen, Danielle H E; Fasotti, Luciano
2015-09-01
Both errorless learning (EL) and Goal Management Training (GMT) have been shown to be effective cognitive rehabilitation methods for optimizing performance on everyday skills after brain injury. We examined whether a combination of EL and GMT is superior to traditional GMT for training complex daily tasks in brain-injured patients with executive dysfunction. This was an assessor-blinded randomized controlled trial conducted in 67 patients with executive impairments due to brain injury of a non-progressive nature (minimal post-onset time: 3 months), referred for outpatient rehabilitation. Individually selected everyday tasks were trained using 8 sessions of an experimental combination of EL and GMT or via conventional GMT, which follows a trial-and-error approach. The primary outcome measure was everyday task performance assessed after treatment compared to baseline. Goal attainment scaling, rated by both trainers and patients, was used as the secondary outcome measure. EL-GMT improved everyday task performance significantly more than conventional GMT (adjusted difference 15.43, 95% confidence interval [CI] [4.52, 26.35]; Cohen's d=0.74). Goal attainment, as scored by the trainers, was significantly higher after EL-GMT compared to conventional GMT (mean difference 7.34, 95% CI [2.99, 11.68]; Cohen's d=0.87). The patients' goal attainment scores did not differ between the two treatment arms (mean difference 3.51, 95% CI [-1.41, 8.44]). Our study is the first to show that preventing the occurrence of errors during executive strategy training enhances the acquisition of everyday activities. A combined EL-GMT intervention is a valuable contribution to cognitive rehabilitation in clinical practice.
Alghadir, Ahmad H; Anwer, Shahnawaz; Iqbal, Amir; Iqbal, Zaheen Ahmed
2018-01-01
Objective Several scales are commonly used for assessing pain intensity. Among them, the numerical rating scale (NRS), visual analog scale (VAS), and verbal rating scale (VRS) are often used in clinical practice. However, no study has performed psychometric analyses of their reliability and validity in the measurement of osteoarthritic (OA) pain. Therefore, the present study examined the test–retest reliability, validity, and minimum detectable change (MDC) of the VAS, NRS, and VRS for the measurement of OA knee pain. In addition, the correlations of the VAS, NRS, and VRS with demographic variables were evaluated. Methods The study included 121 subjects (65 women, 56 men; aged 40–80 years) with OA of the knee. Test–retest reliability of the VAS, NRS, and VRS was assessed during two consecutive visits separated by a 24 h interval. Validity was tested using Pearson's correlation coefficients between the baseline scores of the VAS, NRS, and VRS and the demographic variables (age, body mass index [BMI], sex, and OA grade). The standard error of measurement (SEM) and the MDC were calculated to assess statistically meaningful changes. Results The intraclass correlation coefficients of the VAS, NRS, and VRS were 0.97, 0.95, and 0.93, respectively. The VAS, NRS, and VRS were significantly related to the demographic variables (age, BMI, sex, and OA grade). The SEM of the VAS, NRS, and VRS was 0.03, 0.48, and 0.21, respectively. The MDC of the VAS, NRS, and VRS was 0.08, 1.33, and 0.58, respectively. Conclusion All three scales had excellent test–retest reliability. However, the VAS was the most reliable, with the smallest errors in the measurement of OA knee pain. PMID:29731662
Ohno, Shotaro; Takahashi, Kana; Inoue, Aimi; Takada, Koki; Ishihara, Yoshiaki; Tanigawa, Masaru; Hirao, Kazuki
2017-12-01
This study aims to examine the smallest detectable change (SDC) and test-retest reliability of the Center for Epidemiologic Studies Depression Scale (CES-D), General Self-Efficacy Scale (GSES), and 12-item General Health Questionnaire (GHQ-12). We tested 154 young adults at baseline and 2 weeks later. We calculated the intra-class correlation coefficients (ICCs) for test-retest reliability with a two-way random effects model for agreement. We then calculated the standard error of measurement (SEM) for agreement using the ICC formula. The SEM for agreement was used to calculate SDC values at the individual level (SDC_ind) and group level (SDC_group). The study participants included 137 young adults. The ICCs for all self-reported outcome measurement scales exceeded 0.70. The SEM of the CES-D was 3.64, leading to an SDC_ind of 10.10 points and an SDC_group of 0.86 points. The SEM of the GSES was 1.56, leading to an SDC_ind of 4.33 points and an SDC_group of 0.37 points. The SEM of the GHQ-12 with bimodal scoring was 1.47, leading to an SDC_ind of 4.06 points and an SDC_group of 0.35 points. The SEM of the GHQ-12 with Likert scoring was 2.44, leading to an SDC_ind of 6.76 points and an SDC_group of 0.58 points. To confirm that a change is not a result of measurement error, a score on these self-reported outcome measurement scales would need to change by an amount greater than these SDC values. This has important implications for clinicians and epidemiologists when assessing outcomes. © 2017 John Wiley & Sons, Ltd.
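Under the standard definitions, these SDC figures follow directly from the reported SEMs: SDC_ind = 1.96 × sqrt(2) × SEM, and SDC_group = SDC_ind / sqrt(n). A quick check of the CES-D numbers with n = 137 completers (small discrepancies against the reported values reflect rounding of the SEM):

```python
import math

# Verifying the abstract's CES-D arithmetic under the standard formulas:
# SDC_ind = 1.96 * sqrt(2) * SEM (individual level),
# SDC_group = SDC_ind / sqrt(n) (group level).
sem_cesd, n = 3.64, 137
sdc_ind = 1.96 * math.sqrt(2) * sem_cesd
sdc_group = sdc_ind / math.sqrt(n)
print(round(sdc_ind, 2), round(sdc_group, 2))   # ~10.09 and 0.86, as reported
```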
Real-Time Minimization of Tracking Error for Aircraft Systems
NASA Technical Reports Server (NTRS)
Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John
2013-01-01
This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. When no errors are present, the original control design is tuned for optimal performance. Adaptive control works toward achieving nominal performance whenever the design has modeling uncertainties or errors, or when the vehicle undergoes a substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to a dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion, and that the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) model of aircraft operation may be changed.
NASA Technical Reports Server (NTRS)
Bolten, John D.; Mladenova, Iliana E.; Crow, Wade; De Jeu, Richard
2016-01-01
A primary operational goal of the United States Department of Agriculture (USDA) is to improve foreign market access for U.S. agricultural products. A large fraction of this crop condition assessment is based on satellite imagery and ground data analysis. The baseline soil moisture estimates that are currently used for this analysis are based on output from the modified Palmer two-layer soil moisture model, updated to assimilate near-real time observations derived from the Soil Moisture Ocean Salinity (SMOS) satellite. The current data assimilation system is based on a 1-D Ensemble Kalman Filter approach, where the observation error is modeled as a function of vegetation density. This allows for offsetting errors in the soil moisture retrievals. The observation error is currently adjusted using Normalized Difference Vegetation Index (NDVI) climatology. In this paper we explore the possibility of utilizing microwave-based vegetation optical depth instead.
What errors do peer reviewers detect, and does training improve their ability to detect them?
Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard
2008-10-01
To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed, and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1), reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers; among reviewers who rejected the papers, over 60% identified this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem studied here, the truth model uses gravity with spherical, J2 and J4 terms, plus a standard exponential-type atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem, a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
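A minimal linear-algebra sketch of the contrast being drawn, assuming a plain batch weighted least squares estimator: the theoretical covariance (H'WH)^-1 maps only the assumed observation noise into state space, while scaling it by the average weighted residual variance lets unmodeled errors inflate the covariance. The noise levels and dimensions below are invented, and this is an illustration of the general idea rather than the paper's exact formulation:

```python
import numpy as np

# Theoretical vs residual-scaled ("empirical") WLS covariance.
rng = np.random.default_rng(2)
m, n = 200, 4
H = rng.normal(size=(m, n))                      # linearized measurement matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])
sigma_assumed = 0.1                              # noise level the filter believes
z = H @ x_true + rng.normal(0, 0.25, m)          # reality is noisier than assumed
W = np.eye(m) / sigma_assumed**2                 # weights from assumed noise

P_theory = np.linalg.inv(H.T @ W @ H)            # maps only assumed noise
x_hat = P_theory @ H.T @ W @ z                   # WLS estimate
r = z - H @ x_hat                                # residuals carry all actual errors
s2 = (r @ W @ r) / (m - n)                       # average weighted residual variance
P_emp = s2 * P_theory                            # residual-scaled covariance
print("covariance inflation factor:", s2)        # ~ (0.25/0.1)^2 = 6.25
```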
Error Analysis and Validation for InSAR Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
The InSAR technique is an important method for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range error on InSAR height measurement was analysed and discussed. Based on the theory of InSAR height measurement, the error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
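As a back-of-envelope illustration of one term in such a model, the flat-geometry relation h = H - r cos(theta) propagates a slant range error into height with sensitivity dh/dr = -cos(theta). The numbers below are rough TanDEM-X-like values assumed for illustration; the paper's full model also couples range into the interferometric phase and look angle:

```python
import numpy as np

# First-order height error from a slant range error under h = H - r*cos(theta).
H_orbit = 514e3                 # m, approximate TanDEM-X orbit height (assumed)
theta = np.deg2rad(35.0)        # look angle (assumed)
dr = 1.0                        # m, slant range error
dh = -np.cos(theta) * dr        # first-order propagated height error
print("height error per metre of range error:", dh)   # ~ -0.82 m
```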
Error disclosure: a new domain for safety culture assessment.
Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J
2012-07-01
To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.
Time-dependent gravity in Southern California, May 1974 to April 1979
NASA Technical Reports Server (NTRS)
Whitcomb, J. H.; Franzen, W. O.; Given, J. W.; Pechmann, J. C.; Ruff, L. J.
1980-01-01
The Southern California gravity survey, begun in May 1974 to obtain gravity measurements of high spatial and temporal density to be coordinated with long-baseline three-dimensional geodetic measurements of the Astronomical Radio Interferometric Earth Surveying project, is presented. Gravity data were obtained from 28 stations located in and near the seismically active San Gabriel section of the Southern California Transverse Ranges and the adjoining San Andreas Fault, at intervals of one to two months, using gravity meters read relative to a base-station standard meter. A single-reading standard deviation of 11 microGal is obtained, which leads to a relative deviation of 16 microGal between stations, with data averaging reducing the standard error to 2 to 3 microGal. The largest gravity variations observed are found to correlate with nearby well water variations and smoothed rainfall levels, indicating the importance of ground water variations to gravity measurements. The largest earthquake to occur during the survey, which extended to April 1979, is found to be accompanied, at the station closest to the earthquake, by the largest measured gravity changes that cannot be related to factors other than tectonic distortion.
A toolkit for measurement error correction, with a focus on nutritional epidemiology
Keogh, Ruth H; White, Ian R
2014-01-01
Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
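A minimal sketch of regression calibration in the simplest setting the paper covers, classical error with one repeated measurement: the naive slope is attenuated by the reliability ratio lambda = var(X)/var(X*), which can be estimated from the covariance of the two replicates. The data and effect size below are synthetic:

```python
import numpy as np

# Regression calibration with one replicate under classical error.
rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0, 1, n)                     # true exposure (unobserved)
x1 = x + rng.normal(0, 1, n)                # measurement 1 (e.g. diet diary)
x2 = x + rng.normal(0, 1, n)                # measurement 2 (replicate)
y = 0.5 * x + rng.normal(0, 1, n)           # outcome; true slope is 0.5

beta_naive = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)    # attenuated (~0.25)
lam = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)          # reliability ratio (~0.5)
print("naive:", beta_naive, "corrected:", beta_naive / lam)   # corrected ~0.5
```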
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms, the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
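A stripped-down sketch of the underlying PSO-SVM idea, searching (C, gamma) for an SVR in log10 space against a cross-validated error score; it omits the paper's natural-selection and simulated-annealing additions, and the toy signal, swarm size, and coefficients are arbitrary choices:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
t = np.linspace(0, 6, 300)
X = t.reshape(-1, 1)
y = np.sin(t) + 0.1 * rng.normal(size=t.size)    # toy stand-in for an error signal

def fitness(p):
    # p = [log10 C, log10 gamma]; higher (less negative) MSE is better.
    model = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

n_part, iters = 12, 15
lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])   # search box (log10)
pos = rng.uniform(lo, hi, size=(n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, 1))
    # Standard PSO update: inertia + pull toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()
print("best log10(C), log10(gamma):", gbest)
```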
Refractive errors in children and adolescents in Bucaramanga (Colombia).
Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly
2017-01-01
The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications applied. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. Keratometric readings were statistically significantly steeper in myopic than in hyperopic eyes. The frequency of refractive errors found here (36.7%) is moderate compared to global data. The rates and parameters differed statistically by sex and age group. Our findings are useful for establishing refractive error benchmarks in low- and middle-income countries and as a baseline for following their variation by sociodemographic factors.
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies, supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified: customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care; taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analyses, and error reduction strategies according to findings from these published studies.
NASA Astrophysics Data System (ADS)
Penn, C. A.; Clow, D. W.; Sexstone, G. A.
2017-12-01
Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of the factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by the available snowpack and meteorological data. In the headwaters of the Rio Grande, the NRCS produces monthly Water Supply Outlook Reports from January through June. This study evaluates the accuracy of these forecasts since 1990 and examines which factors may contribute to forecast error. The Rio Grande headwaters have experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a spatially distributed, physics-based snow model was used to assess possible effects of land cover change on snowpack properties. Trends in forecast error are variable, while baseline model results show consistent under-prediction in the recent decade, highlighting the possible compounding effects of climate and land cover changes.
NASA Astrophysics Data System (ADS)
Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang
2018-02-01
This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to the optical axis are driven by the error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of a laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm when the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror reduced the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.
Preliminary GOES-R ABI navigation and registration assessment results
NASA Astrophysics Data System (ADS)
Tan, B.; Dellomo, J.; Wolfe, R. E.; Reth, A. D.
2017-12-01
The US Geostationary Operational Environmental Satellite-R Series (GOES-R) was launched on November 19, 2016, and was designated GOES-16 upon reaching geostationary orbit ten days later. The Advanced Baseline Imager (ABI) is the primary instrument on the GOES-R series for imaging Earth's surface and atmosphere to aid in weather prediction and climate monitoring. We developed algorithms and software for independent verification of the ABI Image Navigation and Registration (INR). Since late January 2017, four INR metrics have been continuously generated to monitor ABI INR performance: navigation (NAV) error, channel-to-channel registration (CCR) error, frame-to-frame registration (FFR) error, and within-frame registration (WIFR) error. In this paper, we describe the fundamental algorithm used for the image registration and briefly discuss the processing flow of the INR Performance Assessment Tool Set (IPATS) developed for ABI INR. The accuracy assessment shows that the IPATS measurement error is about 1/20 of a pixel. We then present the GOES-16 NAV assessment results, the primary metric, from January to August 2017. The INR has improved over time as post-launch tests were performed and corrections were applied. The mean NAV error of the visible and near-infrared (VNIR) channels dropped from 20 μrad in January to around 5 μrad (±4 μrad, 1σ) in June, while the mean NAV error of the long-wave infrared (LWIR) channels dropped from around 70 μrad in January to around 5 μrad (±15 μrad, 1σ) in June. A full global ABI image is composed of 22 east-west swaths. The swath-wise NAV error analysis shows that there was some variation in the mean swath-wise NAV errors; the variations are as large as about 20% of the scene mean NAV errors. As expected, the swaths over the tropical area have far fewer valid assessments (matchups) than those in the mid-latitude region due to cloud coverage. It was also found that there was a rotation (clocking) of the LWIR focal plane that was seen in both the NAV and CCR results; the rotation was corrected by an INR update in June 2017. Through deep-dive examinations of the scenes with large mean and/or variation in INR errors, we validated that IPATS is an excellent tool for assessing and improving the GOES-16 ABI INR and is also useful for INR long-term monitoring.
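The abstract does not spell out the registration algorithm, but a common building block for such shift measurements is the FFT cross-correlation peak between an image chip and a reference. A whole-pixel toy version follows (IPATS itself resolves shifts to roughly 1/20 pixel, which requires subpixel peak interpolation not shown here):

```python
import numpy as np

# Estimate the translation between a reference chip and a shifted copy from
# the circular cross-correlation peak computed via FFT. Whole-pixel only.
rng = np.random.default_rng(5)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1))     # known displacement to recover

xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(shifted))).real
dy, dx = np.unravel_index(xcorr.argmax(), xcorr.shape)
# Unwrap circular shifts larger than half the chip size into signed offsets.
dy, dx = [(d - s if d > s // 2 else d) for d, s in zip((dy, dx), xcorr.shape)]
print("displacement of ref relative to shifted copy:", dy, dx)   # (-3, 5)
```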
Hoffmann, B; Müller, V; Rochon, J; Gondan, M; Müller, B; Albay, Z; Weppler, K; Leifermann, M; Mießner, C; Güthlin, C; Parker, D; Hofinger, G; Gerlach, F M
2014-01-01
The measurement of safety culture in healthcare is generally regarded as a first step towards improvement. Based on a self-assessment of safety culture, the Frankfurt Patient Safety Matrix (FraTrix) aims to enable healthcare teams to improve safety culture in their organisations. In this study we assessed the effects of FraTrix on safety culture in general practice. We conducted an open randomised controlled trial in 60 general practices. FraTrix was applied over a period of 9 months during three facilitated team sessions in intervention practices. At baseline and after 12 months, scores were allocated for safety culture as expressed in practice structure and processes (indicators), in safety climate and in patient safety incident reporting. The primary outcome was the indicator error management. During the team sessions, practice teams reflected on their safety culture and decided on about 10 actions per practice to improve it. After 12 months, no significant differences were found between intervention and control groups in terms of error management (competing probability=0.48, 95% CI 0.34 to 0.63, p=0.823), 11 further patient safety culture indicators and safety climate scales. Intervention practices showed better reporting of patient safety incidents, reflected in a higher number of incident reports (mean (SD) 4.85 (4.94) vs 3.10 (5.42), p=0.045) and incident reports of higher quality (scoring 2.27 (1.93) vs 1.49 (1.67), p=0.038) than control practices. Applied as a team-based instrument to assess safety culture, FraTrix did not lead to measurable improvements in error management. Comparable studies with more positive results had less robust study designs. In future research, validated combined methods to measure safety culture will be required. In addition, more attention should be paid to evaluation of process parameters. Implemented actions and incident reporting may be more appropriate target endpoints. German Clinical Trials Register (Deutsches Register Klinischer Studien, DRKS) No. DRKS00000145.
NASA Technical Reports Server (NTRS)
Won, Mark J.
1990-01-01
Wind tunnel tests of propulsion-integrated aircraft models have identified inlet flow distortion as a major source of compressor airflow measurement error in turbine-powered propulsion simulators. Consequently, two Compact Multimission Aircraft Propulsion Simulator (CMAPS) units were statically tested at sea-level ambient conditions to establish simulator operating performance characteristics and to calibrate the compressor airflow against an accurate bellmouth flowmeter in the presence of inlet flow distortions. The distortions were generated using variously shaped wire-mesh screens placed upstream of the compressor. CMAPS operating maps and performance envelopes were obtained for inlet total pressure distortions (the ratio of the difference between the maximum and minimum total pressures to the average total pressure) up to 35 percent, and were compared to baseline simulator operating characteristics for a uniform inlet. Deviations from CMAPS baseline performance were attributed to the coupled variation of both compressor inlet-flow distortion and Reynolds number index throughout the simulator operating envelope for each screen configuration. Four independent methods were used to determine CMAPS compressor airflow: direct compressor inlet and discharge measurements, an entering/exiting flow-balance relationship, and a correlation between the mixer pressure and the corrected compressor airflow. Of the four methods, the last yielded the least scatter in the compressor flow coefficient, approximately ±3 percent over the range of flow distortions.
Poststroke Fatigue: Who Is at Risk for an Increase in Fatigue?
van Eijsden, Hanna Maria; van de Port, Ingrid Gerrie Lambert; Visser-Meily, Johanna Maria August; Kwakkel, Gert
2012-01-01
Background. Several studies have examined determinants related to post-stroke fatigue. However, it is unclear which determinants can predict an increase in poststroke fatigue over time. Aim. This prospective cohort study aimed to identify determinants which predict an increase in post-stroke fatigue. Methods. A total of 250 patients with stroke were examined at inpatient rehabilitation discharge (T0) and 24 weeks later (T1). Fatigue was measured using the Fatigue Severity Scale (FSS). An increase in post-stroke fatigue was defined as an increase in the FSS score beyond the 95% limits of the standard error of measurement of the FSS (i.e., 1.41 points) between T0 and T1. Candidate determinants included personal factors, stroke characteristics, physical, cognitive, and emotional functions, and activities and participation and were assessed at T0. Factors predicting an increase in fatigue were identified using forward multivariate logistic regression analysis. Results. The only independent predictor of an increase in post-stroke fatigue was FSS (OR 0.50; 0.38–0.64, P < 0.001). The model including FSS at baseline correctly predicted 7.9% of the patients who showed increased fatigue at T1. Conclusion. The prognostic model to predict an increase in fatigue after stroke has limited predictive value, but baseline fatigue is the most important independent predictor. Overall, fatigue levels remained stable over time. PMID:22028989
Schmidt, Frank L; Le, Huy; Ilies, Remus
2003-06-01
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
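A simulated illustration of the paper's central point, not the authors' procedure: items measured on two occasions share a transient component within each occasion. Cronbach's alpha computed on one occasion counts that transient variance as true-score variance, whereas the correlation of total scores across occasions (a simple stand-in for the coefficient of equivalence and stability) does not, so alpha comes out higher:

```python
import numpy as np

# Two testing occasions; each adds a transient component shared by all items
# within that occasion, plus item-level random response error. All variances
# are invented for illustration.
rng = np.random.default_rng(6)
n, k = 10_000, 10
trait = rng.normal(0, 1, (n, 1))                       # stable true score

def occasion():
    transient = rng.normal(0, 0.5, (n, 1))             # shared within occasion
    return trait + transient + rng.normal(0, 1, (n, k))  # + random response error

X1, X2 = occasion(), occasion()

item_var = X1.var(axis=0, ddof=1).sum()
total_var = X1.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)       # Cronbach's alpha, occasion 1
ces = np.corrcoef(X1.sum(axis=1), X2.sum(axis=1))[0, 1]  # cross-occasion stand-in
print("alpha:", round(alpha, 3), "cross-occasion r:", round(ces, 3))  # alpha > r
```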
NASA Astrophysics Data System (ADS)
Gehlot, B. K.; Koopmans, L. V. E.; de Bruyn, A. G.; Zaroubi, S.; Brentjens, M. A.; Asad, K. M. B.; Hatef, M.; Jelić, V.; Mevius, M.; Offringa, A. R.; Pandey, V. N.; Yatawatta, S.
2018-05-01
Contamination due to foregrounds (Galactic and extragalactic), calibration errors and ionospheric effects poses major challenges for the detection of the cosmic 21 cm signal in various Epoch of Reionization (EoR) experiments. We present the results of a pilot study of a field centered on 3C196 using LOFAR Low Band (56-70 MHz) observations, in which we quantify various wide-field and calibration effects such as gain errors, polarized foregrounds, and ionospheric effects. We observe a `pitchfork' structure in the 2D power spectrum of the polarized intensity in delay-baseline space, which leaks into the modes beyond the instrumental horizon (the EoR/CD window). We show that this structure largely arises from strong instrumental polarization leakage (~30%) towards Cas A (~21 kJy at 81 MHz, the brightest source in the northern sky), which is far away from the primary field of view. We measure an extremely small ionospheric diffractive scale (r_diff ≈ 430 m at 60 MHz) towards Cas A, resembling pure Kolmogorov turbulence, compared to r_diff ~ 3-20 km towards zenith at 150 MHz for typical ionospheric conditions. This is one of the smallest diffractive scales ever measured at these frequencies. Our work provides insights into understanding the nature of the aforementioned effects and mitigating them in future Cosmic Dawn observations (e.g. with SKA-low and HERA) in the same frequency window.
The effect of multifocal soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen A
2013-07-01
To compare changes in peripheral refraction with single-vision (SV) and multifocal (MF) correction of distance central refraction with commercially available SV and MF soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere and Proclear Multifocal SCLs to correct their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with the two different types of SCL correction. At baseline, refraction was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared with center at 30 and 35 degrees in the temporal visual field (VF) in low myopes, and at 30 and 35 degrees in the temporal VF, and 10, 30, and 35 degrees in the nasal VF in moderate myopes. Single-vision and MF distance correction with Proclear Sphere and Proclear Multifocal SCLs, respectively, caused a hyperopic shift in refraction at all locations in the horizontal VF. Compared with SV correction, MF SCL correction caused a significant relative myopic shift at all locations in the nasal VF in both low and moderate myopes and also at 35 degrees in the temporal VF in moderate myopes. Correction of central refractive error with SV and MF SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. Single-vision SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction to experience an absolute hyperopic defocus. Multifocal SCL correction resulted in a relative myopic shift in peripheral refraction compared with SV SCL correction. This myopic shift may explain recent reports of reduced myopia progression rates with MF SCL correction.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
NASA Astrophysics Data System (ADS)
Lee, Minho; Cho, Nahm-Gyoo
2013-09-01
A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy for 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and decreases the stylus deformation and actual contact point estimation errors, which are major error components of shape measurement using touch probes. The relationship between the measuring force and the accuracy of actual contact point estimation and stylus deformation estimation is examined for practical use of the proposed method. An appropriate measuring force condition is presented for precision measurement.
A long baseline global stereo matching based upon short baseline estimation
NASA Astrophysics Data System (ADS)
Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi
2018-05-01
In global stereo vision, balancing matching efficiency and computing accuracy seems impossible because they contradict each other. In the case of a long baseline, this contradiction becomes more prominent. In order to solve this difficult problem, this paper proposes a novel idea to improve both the efficiency and the accuracy of global stereo matching for a long baseline. Reference images located between the long-baseline image pair are first chosen to form new image pairs with short baselines. The relationship between the disparities of pixels in image pairs with different baselines is revealed by considering the quantization error, so that the disparity search range under the long baseline can be reduced under the guidance of the short baseline to gain matching efficiency. This idea is then integrated into graph cuts (GCs) to form a multi-step GC algorithm based on short baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, image information from pixels that are non-occluded under the short baseline but occluded under the long baseline can be employed to improve the matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for long-baseline stereo matching than that of the traditional GC algorithm. Finally, the validity of the proposed method is examined by experiments on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in terms of efficiency and accuracy, and thus it is suitable for long-baseline stereo matching.
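The baseline-scaling relationship at the heart of this idea is simple: for rectified cameras with focal length f, disparity d = f·B/Z, so a disparity estimated under a short baseline B_s predicts the long-baseline disparity up to the scaled quantization error. A hedged arithmetic sketch with invented numbers:

```python
# Disparity scales linearly with baseline for rectified cameras (d = f*B/Z),
# so a short-baseline estimate narrows the long-baseline search window.
# All values below are assumed for illustration.
f = 1200.0             # focal length in pixels
B_s, B_l = 0.1, 0.5    # short and long baselines, metres
d_s = 24.0             # short-baseline disparity estimate (+/- 1 px quantization)

scale = B_l / B_s
d_pred = d_s * scale                        # predicted long-baseline disparity
search = (d_pred - scale, d_pred + scale)   # the quantization error also scales
print("predicted disparity:", d_pred, "search window:", search)
```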
Visual outcomes after spectacles treatment in children with bilateral high refractive amblyopia.
Lin, Pei-Wen; Chang, Hsueh-Wen; Lai, Ing-Chou; Teng, Mei-Ching
2016-11-01
The aim was to investigate the visual outcomes of treatment with spectacles for bilateral high refractive amblyopia in children three to eight years of age. Children with previously untreated bilateral refractive amblyopia were enrolled. Bilateral high refractive amblyopia was defined as visual acuity (VA) worse than 6/9 in both eyes in the presence of 5.00 D or more of hyperopia, 5.00 D or more of myopia or 2.00 D or more of astigmatism. Full myopic and astigmatic refractive errors were corrected, and the hyperopic refractive errors were corrected within 1.00 D of the full correction. All children received visual assessments at four-weekly intervals. VA, the Worth four-dot test and the Randot preschool stereotest were assessed at baseline and every four weeks for two years. Twenty-eight children with previously untreated bilateral high refractive amblyopia were enrolled. The mean VA at baseline was 0.39 ± 0.24 logMAR and it significantly improved to 0.21, 0.14, 0.11, 0.05 and 0.0 logMAR at four, eight, 12, 24 weeks and 18 months, respectively (all p = 0.001). The mean stereoacuity (SA) was 1,143 ± 617 arcsec at baseline and it significantly improved to 701, 532, 429, 211 and 98 arcsec at four, eight, 12, 24 weeks and 18 months, respectively (all p = 0.001). The time to achieve 6/6 VA was significantly shorter in eyes of low spherical equivalent (SE) (-2.00 D < SE < +2.00 D) than in those of high SE (SE > +2.00 D) (3.33 ± 2.75 months versus 8.11 ± 4.56 months, p = 0.0005). All subjects had normal fusion on the Worth four-dot test at baseline and at all follow-up visits. Refractive correction with good spectacle compliance improves VA and SA in young children with bilateral high refractive amblyopia. Patients with greater amounts of refractive error require a longer time to achieve resolution of amblyopia. © 2016 Optometry Australia.
NASA Astrophysics Data System (ADS)
Rao, Xiong; Tang, Yunwei
2014-11-01
Land surface deformation evidently exists along a newly built high-speed railway in the southeast of China. In this study, we utilize the Small BAseline Subsets (SBAS)-Differential Synthetic Aperture Radar Interferometry (DInSAR) technique to detect land surface deformation along the railway. In this work, 40 Cosmo-SkyMed satellite images were selected to analyze the spatial distribution and velocity of the deformation in the study area. First, 88 image pairs with high coherence were chosen using an appropriate threshold. These images were used to deduce the deformation velocity map and the variation in time series. This result provides information for orbit correction and ground control point (GCP) selection in the following steps. Then, more image pairs were selected to tighten the constraint in the time dimension and to improve the final result by decreasing the phase unwrapping error; 171 combinations of SAR pairs were ultimately selected. Reliable GCPs were re-selected according to the previously derived deformation velocity map. Orbital residual errors were corrected using these GCPs, and nonlinear deformation components were estimated. A more accurate surface deformation velocity map was thereby produced. Precise geodetic leveling was carried out in the meantime. We compared the leveling result with the geocoded SBAS product using the nearest neighbour method. The mean error and standard deviation of the error were 0.82 mm and 4.17 mm, respectively. This result demonstrates the effectiveness of the DInSAR technique for monitoring land surface deformation, which can provide reliable decision support for high-speed railway design, construction, operation and maintenance.
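As a sketch of the validation step, the snippet below (with synthetic coordinates and velocities, not the study's data) pairs each leveling benchmark with its nearest geocoded SBAS pixel and reports the mean error and standard deviation, mirroring the comparison described above:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Hypothetical point sets: (x, y) positions and deformation velocities (mm/yr)
insar_xy = rng.uniform(0, 5000, size=(1000, 2))   # geocoded SBAS pixels
insar_vel = rng.normal(size=1000)
level_xy = rng.uniform(0, 5000, size=(30, 2))     # leveling benchmarks
level_vel = rng.normal(size=30)

# Pair each benchmark with its nearest SBAS pixel and compare velocities
tree = cKDTree(insar_xy)
_, idx = tree.query(level_xy, k=1)
diff = insar_vel[idx] - level_vel

print(f"mean error: {diff.mean():.2f} mm/yr, std: {diff.std(ddof=1):.2f} mm/yr")
```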
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
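A minimal sketch of the Thin Plate Spline idea, assuming SciPy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for the authors' implementation and using synthetic vertex data, is:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# known_pts: vertices where the true patient surface is known (n x 3, mm)
# known_disp: displacement from the SSM estimate to the true surface there
known_pts = rng.uniform(-50, 50, size=(200, 3))               # illustrative
known_disp = 0.01 * known_pts + rng.normal(0, 0.1, (200, 3))  # illustrative

# Fit one smooth thin-plate-spline displacement field (3 output components)
tps = RBFInterpolator(known_pts, known_disp, kernel='thin_plate_spline')

# Warp the SSM estimate in the unknown region; the displacement field varies
# smoothly away from the known data, so the merge at the boundary is seamless
unknown_pts = rng.uniform(-50, 50, size=(50, 3))
corrected = unknown_pts + tps(unknown_pts)
print(corrected.shape)
```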
Localized landslide risk assessment with multi pass L band DInSAR analysis
NASA Astrophysics Data System (ADS)
Yun, HyeWon; Rack Kim, Jung; Lin, Shih-Yuan; Choi, YunSoo
2014-05-01
In terms of data availability and error correction, landslide forecasting by Differential Interferometric SAR (DInSAR) analysis is not an easy task. In particular, landslides caused by anthropogenic construction activities frequently occur on localized cut slopes in mountainous areas. In such circumstances, it is difficult to attain sufficient accuracy because of external factors that introduce error components into the electromagnetic wave propagation. For instance, local climate characteristics such as the orographic effect and proximity to a water source can produce significant anomalies in the water vapor distribution and consequently introduce error components into the InSAR phase measurements. Moreover, the high-altitude parts of the target area cause a stratified tropospheric delay error in the DInSAR measurement. The other obstacle to DInSAR observation over a potential landslide site is the vegetation canopy, which causes decorrelation of the InSAR phase. Thus, DInSAR analysis with the L band ALOS PALSAR is preferable to C band sensors such as ENVISAT, ERS and RADARSAT. Together with L band DInSAR analysis, an improved DInSAR technique that copes with all the above obstacles is necessary. We therefore employed two approaches in this study: StaMPS/MTI (Stanford Method for Persistent Scatterers/Multi-Temporal InSAR; Hooper et al., 2007), newly developed for extracting reliable deformation values through time series analysis, and two-pass DInSAR with error term compensation based on external weather information. Since water vapor observation from a spaceborne radiometer was not feasible here owing to the temporal gap, quantities from the Weather Research and Forecasting (WRF) model with 1 km spatial resolution were used to address the atmospheric phase error in the two-pass DInSAR analysis. It was also observed that the base DEM offset, combined with the time-dependent perpendicular baselines of the InSAR time series, produces a significant error even in advanced time series techniques such as StaMPS/MTI. We compensated for this algorithmically, together with the use of a high-resolution LIDAR DEM. The target area of this study is centered on the eastern part of the Korean peninsula, where landslides caused by geomorphic factors such as steep topography and localized torrential downpours are a critical issue. The surface deformations from error-corrected two-pass DInSAR and StaMPS/MTI are cross-compared and validated against landslide triggering factors such as vegetation, slope and geological properties. The study will be further extended to future SAR sensors by incorporating dynamic analysis of topography to implement a practical landslide forecasting scheme.
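One common first-order treatment of the stratified tropospheric delay mentioned above is a linear phase-versus-elevation regression over pixels assumed deformation-free; a sketch with synthetic data (not the authors' WRF-based correction) follows:

```python
import numpy as np

def remove_stratified_delay(unwrapped_phase, dem_height, stable_mask):
    """Remove the stratified (topography-correlated) tropospheric delay by
    fitting a linear phase-vs-elevation trend over stable pixels and
    subtracting it from the whole interferogram."""
    h = dem_height[stable_mask].ravel()
    p = unwrapped_phase[stable_mask].ravel()
    slope, intercept = np.polyfit(h, p, deg=1)
    return unwrapped_phase - (slope * dem_height + intercept)

# Illustrative synthetic interferogram with a height-correlated component
rng = np.random.default_rng(0)
dem = rng.uniform(0, 800, size=(100, 100))           # m
phase = 0.002 * dem + rng.normal(0, 0.1, dem.shape)  # rad
mask = np.ones_like(dem, dtype=bool)                 # assume all pixels stable
print(remove_stratified_delay(phase, dem, mask).std())  # residual noise only
```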
On the development of voluntary and reflexive components in human saccade generation.
Fischer, B; Biscaldi, M; Gezeck, S
1997-04-18
The saccadic performance of a large number (n = 281) of subjects of different ages (8-70 years) was studied using two saccade tasks: the prosaccade overlap (PO) task and the antisaccade gap (AG) task. From the PO task, the mean reaction times and the percentage of express saccades were determined for each subject. From the AG task, the mean reaction times of the correct antisaccades and of the erroneous prosaccades were measured. In addition, we determined the error rate and the mean correction time, i.e. the time between the end of the first erroneous prosaccade and the following corrective antisaccade. These variables were measured separately for stimuli presented (in random order) at the right or left side. While strong correlations were seen between variables for the right and left sides, considerable side asymmetries were obtained from many subjects. A factor analysis revealed that the seven variables (six eye movement variables plus age) were mainly determined by only two factors, V and F. The V factor was dominated by the variables from the AG task (reaction time, correction time, error rate); the F factor by variables from the PO task (reaction time, percentage of express saccades) and by the reaction time of the errors (prosaccades!) from the AG task. The relationship between the percentage of express saccades and the percentage of errors was completely asymmetric: high numbers of express saccades were accompanied by high numbers of errors, but not vice versa. Only the variables in the V factor covaried with age. A fast decrease of the antisaccade reaction time (by 50 ms), of the correction time (by 70 ms) and of the error rate (from 60 to 22%) was observed between ages 9 and 15 years, followed by a further period of slower decrease until age 25 years. The mean time a subject needed to reach the side opposite to the stimulus, as required by the antisaccade task, decreased from approximately 350 to 250 ms by age 15 years and decreased further by 20 ms before it increased again to approximately 280 ms. At higher ages, there was a slight indication of a reversal of this development. Subjects with high error rates had long antisaccade latencies and needed a long time to reach the opposite side on error trials. The variables obtained from the PO task also varied significantly with age, but by smaller amounts. The results are discussed in relation to the subsystems controlling saccade generation: a voluntary and a reflex component, the latter being suppressed by active fixation. Both systems seem to develop differentially. The data offer a detailed baseline for clinical studies using the pro- and antisaccade tasks as indicators of functional impairments, circumscribed brain lesions, neurological and psychiatric diseases and cognitive deficits.
Random measurement error: Why worry? An example of cardiovascular risk factors.
Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H
2018-01-01
With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
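The direction-of-bias point can be reproduced in a few lines. In the sketch below (simulated data with assumed effect sizes, not the paper's examples), classical error in a confounder leaves residual confounding that overestimates the exposure effect, whereas error in the exposure itself would attenuate it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True model: outcome depends on exposure (beta = 0.5) and confounder (1.0);
# the confounder also drives the exposure, so it must be adjusted for.
confounder = rng.normal(size=n)
exposure = 0.8 * confounder + rng.normal(size=n)
outcome = 0.5 * exposure + 1.0 * confounder + rng.normal(size=n)

def exposure_coef(covariates, y):
    """OLS coefficient of the first covariate, with intercept."""
    X = np.column_stack([np.ones(len(y)), *covariates])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

noisy_conf = confounder + rng.normal(scale=1.0, size=n)  # classical error

print(exposure_coef([exposure, confounder], outcome))  # ~0.50, fully adjusted
print(exposure_coef([exposure, noisy_conf], outcome))  # > 0.50: overestimation
# Error in the confounder inflates the estimate here, while error in the
# exposure itself would attenuate it -- the direction depends on where the
# error sits, as the abstract argues.
```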
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
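A minimal simulation of the attenuation the authors describe, assuming an AR(1) latent process observed with white measurement noise (parameter values are illustrative), is:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.5, 10_000

# Latent AR(1) process plus white measurement noise (the AR+WN model)
latent = np.zeros(n)
for t in range(1, n):
    latent[t] = phi * latent[t - 1] + rng.normal()
observed = latent + rng.normal(scale=1.0, size=n)  # measurement error

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

print(lag1_autocorr(latent))    # ~0.50: the true autoregressive parameter
print(lag1_autocorr(observed))  # ~0.29: the naive AR(1) fit is attenuated
# Here the noise accounts for ~43% of the total variance, within the
# 30-50% range the authors report for their empirical application.
```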
Measuring rapid ocean tidal earth orientation variations with very long baseline interferometry
NASA Astrophysics Data System (ADS)
Sovers, O. J.; Jacobs, C. S.; Gross, R. S.
1993-11-01
Ocean tidal effects on universal time and polar motion (UTPM) are investigated at four nearly diurnal (K1, P1, O1, and Q1) and four nearly semidiurnal (K2, S2, M2, and N2) frequencies by analyzing very long baseline interferometry (VLBI) data extending from 1978 to 1992. We discuss limitations of comparisons between experiment and theory for the retrograde nearly diurnal polar motion components due to their degeneracy with prograde components of the nutation model. Estimating amplitudes of contributions to the modeled VLBI observables at these eight frequencies produces a statistically highly significant improvement of 7 mm to the residuals of a fit to the observed delays. Use of such an improved UTPM model also reduces the 14-30 mm scatter of baseline lengths about a time-linear model of tectonic motion by 3-14 mm, also with high significance levels. A total of 28 UTPM ocean tidal amplitudes can be unambiguously estimated from the data, with resulting UT1 and PM magnitudes as large as 21 μs and 270 microarc seconds (μas) and formal uncertainties of the order of 0.3 μs and 5 μas for UT1 and PM, respectively. Empirically determined UTPM amplitudes and phases are compared to values calculated theoretically by Gross from Seiler's global ocean tide model. The discrepancy between theory and experiment is larger by a factor of 3 for UT1 amplitudes (9 μs) than for prograde PM amplitudes (42 μas). The 14-year VLBI data span strongly attenuates the influence of mismodeled effects on estimated UTPM amplitudes and phases that are not coherent with the eight frequencies of interest. Magnitudes of coherent and quasi-coherent systematic errors are quantified by means of internal consistency tests. We conclude that coherent systematic effects are many times larger than the formal uncertainties and can be as large as 4 μs for UT1 and 60 μas for polar motion. On the basis of such realistic error estimates, 22 of the 31 fitted UTPM ocean tidal amplitudes differ from zero by more than 2σ.
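The amplitude-estimation step can be sketched as an ordinary least-squares fit of in-phase and quadrature terms at the tidal frequencies; the snippet below uses synthetic UT1 residuals and approximate frequencies, not the actual VLBI solution:

```python
import numpy as np

# Approximate tidal frequencies in cycles per day (illustrative subset)
freqs = {'K1': 1.0027, 'O1': 0.9295, 'M2': 1.9323, 'S2': 2.0000}

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 5000, 2000))  # irregular VLBI epochs, days

# Synthetic UT1 residuals: a 15-microsecond M2 line buried in noise (seconds)
ut1 = 15e-6 * np.sin(2 * np.pi * freqs['M2'] * t) + rng.normal(0, 30e-6, t.size)

# Design matrix: intercept plus cos/sin pairs at each tidal frequency
cols = [np.ones_like(t)]
for f in freqs.values():
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, ut1, rcond=None)

amp = np.hypot(coef[1::2], coef[2::2])         # one amplitude per tidal line
print(dict(zip(freqs, (amp * 1e6).round(1))))  # microseconds; M2 ~ 15
```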
TH-AB-201-07: Filmless Treatment Localization QA for the CyberKnife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Spectrum Medical Physics, LLC, Greenville, SC; Noll, M
Purpose: Accuray recommends daily evaluation of the treatment localization and delivery systems (TLS/TDS) of the CyberKnife. The vendor-provided solution is a Winston-Lutz-type test that evaluates film shadows from an orthogonal beam pair (known as AQA). Since film-based techniques are inherently inefficient and potentially inconsistent and uncertain, this study explores a method which provides a comparable test with greater efficiency, consistency, and certainty. This test uses the QAStereoChecker (QASC, Standard Imaging, Inc., Middleton, WI), a high-resolution flat-panel detector with coupled fiducial markers for automated alignment. Fiducial tracking is used to achieve high translational and rotational position accuracy. Methods: A plan is generated delivering five circular beams, with varying orientation and angular incidence. Several numeric quantities are calculated for each beam: eccentricity, centroid location, area, major-axis length, minor-axis length, and orientation angle. Baseline values were acquired and the repeatability of the baselines analyzed. Next, errors were induced in the path calibration of the CK, and the test repeated. A correlative study was performed between the induced errors and quantities measured using the QASC. Based on vendor recommendations, this test should be able to detect a TLS/TDS offset of 0.5 mm. Results: Centroid shifts correlated well with induced plane-perpendicular offsets (p < 0.01). Induced vertical shifts correlated best with the absolute average deviation of eccentricities (p < 0.05). The values of these metrics corresponding to the 0.5 mm induced-deviation threshold were used as individual pass/fail criteria. These were then used to evaluate induced offsets which shifted the CK in all axes (a clinically realistic offset), with a total offset of 0.5 mm. This test provided high specificity and sensitivity. Conclusion: From setup to analysis, this filmless TLS/TDS test requires 4 minutes, as opposed to 15–20 minutes for film-based methods. The techniques introduced can potentially isolate errors in individual joints of the CK robot. Spectrum Medical Physics, LLC of Greenville, SC has a consulting contract with Standard Imaging of Middleton, WI.
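The beam metrics listed in the Methods can be computed from simple image moments; a sketch (the thresholding choice and synthetic spot are assumptions, not the QASC software's algorithm) is:

```python
import numpy as np

def spot_metrics(img):
    """Centroid, area and eccentricity of a beam spot from image moments."""
    ys, xs = np.nonzero(img > 0.5 * img.max())  # simple half-maximum threshold
    cx, cy = xs.mean(), ys.mean()               # centroid (pixels)
    cov = np.cov(np.vstack([xs, ys]))           # second central moments
    evals = np.linalg.eigvalsh(cov)             # minor-, major-axis variances
    ecc = np.sqrt(1.0 - evals[0] / evals[1])
    return (cx, cy), xs.size, ecc

# Illustrative elliptical Gaussian spot on a 200 x 200 detector image
y, x = np.mgrid[0:200, 0:200]
img = np.exp(-(((x - 90) / 12.0) ** 2 + ((y - 110) / 8.0) ** 2))
print(spot_metrics(img))  # centroid near (90, 110), nonzero eccentricity
```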
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Aiming at the influence factors of circular grating dividing error and of rolling-wheel eccentricity and surface shape errors, this paper presents a correction method based on the rolling wheel: a composite error model including all of the above influence factors is derived and then used to correct the non-circular angle measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error correction method can improve the diameter measurement accuracy obtainable with the rolling-wheel principle. It has wide application prospects for measurements requiring accuracy better than 5 μm/m.
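One way to realize such a composite correction, sketched below under the assumption that the combined angle error can be modeled as a short harmonic series over one wheel revolution (eccentricity contributing the first harmonic, dividing and surface-shape errors higher ones), is to fit the series against a reference standard and subtract it:

```python
import numpy as np

def fit_harmonic_correction(theta, angle_err, n_harmonics=3):
    """Fit the composite angle-error model as a short harmonic series and
    return a callable that predicts the error at any wheel angle."""
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), angle_err, rcond=None)

    def predict(th):
        basis = [np.ones_like(th)]
        for k in range(1, n_harmonics + 1):
            basis += [np.cos(k * th), np.sin(k * th)]
        return np.column_stack(basis) @ coef

    return predict

# Illustrative calibration against a reference angle standard (radians)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
err = 40e-6 * np.sin(theta + 0.3) + 5e-6 * np.cos(2 * theta)  # synthetic
correction = fit_harmonic_correction(theta, err)
print(np.abs(err - correction(theta)).max())  # near-zero residual
```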