Sample records for baseline model error

  1. Baseline Error Analysis and Experimental Validation for Height Measurement of Formation InSAR Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, T.; Zhang, X.; Geng, X.

    2018-04-01

    In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of the SRTM DEM and TanDEM-X parameters, and the spatial distribution characteristics and error propagation behavior of InSAR height measurement were evaluated.
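
    The abstract does not reproduce the model itself; as a hedged illustration of how baseline errors propagate into height, the standard first-order InSAR sensitivities (generic notation, not necessarily the paper's) are

    h = H - r\cos\theta, \qquad
    \delta h_{B} \approx -\,\frac{r\sin\theta\,\tan(\theta-\alpha)}{B}\,\delta B, \qquad
    \delta h_{\alpha} \approx r\sin\theta\,\delta\alpha,

    where H is the platform altitude, r the slant range, \theta the look angle, B the baseline length, and \alpha the baseline tilt angle, so that baseline length and tilt errors map nearly linearly into height error across the swath.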

  2. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis, from the radar system to topographic height errors, for bi-static single-pass SAR interferometry with a satellite tandem pair. Because the baseline length and orientation evolve spatially and temporally under orbital dynamics, the height accuracy of the system is modeled as a function of spacecraft position and ground location. Vector sensitivity equations for height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. The model includes terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a tandem satellite mission at 514 km altitude and 97.4 degree inclination, with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED level 3 accuracy can be achieved.

  3. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.
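
    The 'mapping' function referred to here is of the Marini continued-fraction type; as a hedged sketch of that general form (the coefficients a, b, c depend on surface pressure, temperature, and humidity, and the paper's exact parameterization is not reproduced here), the elevation dependence of the atmospheric delay is modeled as

    \tau(\varepsilon) = \tau_{z}\, m(\varepsilon), \qquad
    m(\varepsilon) = \frac{1}{\sin\varepsilon + \dfrac{a}{\tan\varepsilon + \dfrac{b}{\sin\varepsilon + c}}},

    where \tau_z is the zenith delay and \varepsilon the elevation angle.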

  4. Study on the calibration and optimization of double theodolites baseline

    NASA Astrophysics Data System (ADS)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    In a double-theodolite measurement system, the baseline serves as the scale benchmark of the system and directly affects its accuracy. This paper presents a method for calibrating and optimizing the double-theodolite baseline: the two theodolites measure a reference ruler of known length, and the baseline is then obtained by inverting the measurement formula. Based on the law of error propagation, the analysis shows that the baseline error function is an important index of system accuracy, and that the position and posture of the reference ruler affect the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error, that the effect of posture is not uniform, and that the baseline error is smallest when the reference ruler is placed at x = 500 mm and y = 1000 mm in the measurement space. The experimental results are consistent with the theoretical analysis. The study of reference-ruler placement presented here provides a useful reference for improving the accuracy of double-theodolite measurement systems.

  5. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes in the distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in determining the time variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations, when articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner-cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, the sensitivity of delay error to various error parameters, or their combinations, can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors, and the resulting delay errors are characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience in describing the RMS of errors across the field-of-regard (FOR), and second for convenience in combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. These residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat; the non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.

  6. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance.

    PubMed

    Wong, Aaron L; Shelhamer, Mark

    2014-05-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.
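
    The power-law intertrial correlations described above are typically quantified with fractal time-series tools such as detrended fluctuation analysis (DFA). The sketch below is a minimal illustrative DFA implementation; it is an assumption about the analysis toolchain, not code from the study.

    import numpy as np

    def dfa(series, window_sizes):
        """Detrended fluctuation analysis: returns the scaling exponent alpha.

        A power law F(n) ~ n**alpha with alpha > 0.5 indicates long-range
        (persistent) intertrial correlations of the kind discussed above.
        """
        x = np.asarray(series, dtype=float)
        profile = np.cumsum(x - x.mean())            # integrated (profile) series
        fluctuations = []
        for n in window_sizes:
            n_segments = len(profile) // n
            rms = []
            for i in range(n_segments):
                segment = profile[i * n:(i + 1) * n]
                t = np.arange(n)
                coeffs = np.polyfit(t, segment, 1)   # local linear trend
                detrended = segment - np.polyval(coeffs, t)
                rms.append(np.sqrt(np.mean(detrended ** 2)))
            fluctuations.append(np.mean(rms))
        # slope of log F(n) vs log n is the DFA exponent
        alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
        return alpha

    # Example: intertrial timing errors from a predictive-saccade block (synthetic here)
    errors = np.random.randn(512)
    print(dfa(errors, window_sizes=[8, 16, 32, 64, 128]))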

  7. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.
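
    A hedged sketch of the kind of longitudinal mixed-effects model described above, using statsmodels in Python. The variable names and the CD4 example are illustrative assumptions, not the authors' code; their extended approach additionally handles arbitrary interactions with time, time-dependent covariates, and baseline measurement error of those covariates.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per visit, including the baseline visit (time = 0):
    # columns: id, time, group, cd4   (illustrative names)
    df = pd.read_csv("cohort_visits.csv")

    # Random intercept and slope per subject; baseline observations are part of
    # the outcome vector rather than a covariate, so baseline measurement error
    # is absorbed by the model instead of biasing the group comparison.
    model = smf.mixedlm("cd4 ~ time * group", data=df,
                        groups=df["id"], re_formula="~time")
    fit = model.fit()
    print(fit.summary())

    # The expected change from baseline to time t, conditional on the underlying
    # (error-free) baseline value, is then obtained from the fixed effects plus
    # the subject-level conditional expectations in fit.random_effects.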

  8. Modeling Nonlinear Errors in Surface Electromyography Due To Baseline Noise: A New Methodology

    PubMed Central

    Law, Laura Frey; Krishnan, Chandramouli; Avin, Keith

    2010-01-01

    The surface electromyographic (EMG) signal is often contaminated by some degree of baseline noise. It is customary for scientists to subtract baseline noise from the measured EMG signal prior to further analyses based on the assumption that baseline noise adds linearly to the observed EMG signal. The stochastic nature of both the baseline and EMG signal, however, may invalidate this assumption. Alternately, “true” EMG signals may be either minimally or nonlinearly affected by baseline noise. This information is particularly relevant at low contraction intensities when signal-to-noise ratios (SNR) may be lowest. Thus, the purpose of this simulation study was to investigate the influence of varying levels of baseline noise (approximately 2 – 40 % maximum EMG amplitude) on mean EMG burst amplitude and to assess the best means to account for signal noise. The simulations indicated baseline noise had minimal effects on mean EMG activity for maximum contractions, but increased nonlinearly with increasing noise levels and decreasing signal amplitudes. Thus, the simple baseline noise subtraction resulted in substantial error when estimating mean activity during low intensity EMG bursts. Conversely, correcting EMG signal as a nonlinear function of both baseline and measured signal amplitude provided highly accurate estimates of EMG amplitude. This novel nonlinear error modeling approach has potential implications for EMG signal processing, particularly when assessing co-activation of antagonist muscles or small amplitude contractions where the SNR can be low. PMID:20869716
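
    One simple way to see why linear subtraction can fail at low signal-to-noise ratios: if the EMG signal and the baseline noise are uncorrelated stochastic processes, their amplitudes combine in power rather than linearly. Under that assumption (an illustrative approximation, not the simulation-derived correction developed in the paper), with M the measured RMS amplitude, S the true EMG RMS, and N the baseline RMS,

    M \approx \sqrt{S^{2} + N^{2}}
    \quad\Rightarrow\quad
    \hat{S}_{\text{power}} = \sqrt{M^{2} - N^{2}} \approx S,
    \qquad
    \hat{S}_{\text{subtract}} = M - N < S .

    For example, at S = 2N the simple subtraction recovers only about 60% of the true amplitude, while the power-domain correction is exact under the stated assumption.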

  9. Comparing Error Correction Procedures for Children Diagnosed with Autism

    ERIC Educational Resources Information Center

    Townley-Cochran, Donna; Leaf, Justin B.; Leaf, Ronald; Taubman, Mitchell; McEachin, John

    2017-01-01

    The purpose of this study was to examine the effectiveness of two error correction (EC) procedures: modeling alone and the use of an error statement plus modeling. Utilizing an alternating treatments design nested into a multiple baseline design across participants, we sought to evaluate and compare the effects of these two EC procedures used to…

  10. Dynamic performance of an aero-assist spacecraft - AFE

    NASA Technical Reports Server (NTRS)

    Chang, Ho-Pen; French, Raymond A.

    1992-01-01

    Dynamic performance of the Aero-assist Flight Experiment (AFE) spacecraft was investigated using a high-fidelity 6-DOF simulation model. Baseline guidance logic, control logic, and a strapdown navigation system to be used on the AFE spacecraft are also modeled in the 6-DOF simulation. During the AFE mission, uncertainties in the environment and the spacecraft are described by an error space which includes both correlated and uncorrelated error sources. The principal error sources modeled in this study include navigation errors, initial state vector errors, atmospheric variations, aerodynamic uncertainties, center-of-gravity off-sets, and weight uncertainties. The impact of the perturbations on the spacecraft performance is investigated using Monte Carlo repetitive statistical techniques. During the Solid Rocket Motor (SRM) deorbit phase, a target flight path angle of -4.76 deg at entry interface (EI) offers very high probability of avoiding SRM casing skip-out from the atmosphere. Generally speaking, the baseline designs of the guidance, navigation, and control systems satisfy most of the science and mission requirements.

  11. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  12. Pricing and hedging derivative securities with neural networks: Bayesian regularization, early stopping, and bagging.

    PubMed

    Gençay, R; Qi, M

    2001-01-01

    We study the effectiveness of cross-validation, Bayesian regularization, early stopping, and bagging in mitigating overfitting and improving generalization for pricing and hedging derivative securities, using daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the MSPE of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used, at least in cases where no appropriate hints are available.
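
    For reference, the Black-Scholes model used as one of the comparison baselines prices a European call and its delta in closed form. The sketch below is a standard implementation; the parameter values are placeholders, not inputs from the study.

    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        """Black-Scholes price and delta of a European call option."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
        delta = norm.cdf(d1)          # hedge ratio used for delta hedging
        return price, delta

    # Placeholder inputs: index level, strike, 3 months to expiry, 5% rate, 20% vol
    print(black_scholes_call(S=450.0, K=460.0, T=0.25, r=0.05, sigma=0.20))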

  13. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler is examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  14. Some unexamined aspects of analysis of covariance in pretest-posttest studies.

    PubMed

    Ganju, Jitendra

    2004-09-01

    The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then it implies that the change from baseline model as a data-generating mechanism is inappropriate. An alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
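
    Point (b) can be illustrated with the textbook variance comparison, a sketch under the stated assumption of equal pretest and posttest within-subject error variances \sigma^2 and pretest-posttest correlation \rho (this is the standard argument, not the article's own derivation):

    \frac{\mathrm{SE}_{\text{change}}}{\mathrm{SE}_{\text{ANCOVA}}}
      = \sqrt{\frac{2\sigma^{2}(1-\rho)}{\sigma^{2}(1-\rho^{2})}}
      = \sqrt{\frac{2}{1+\rho}} \;\le\; \sqrt{2},

    with the bound √2 approached only as \rho \to 0 and the ratio tending to 1 as \rho \to 1, so under these assumptions ANCOVA never has the larger standard error.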

  15. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  16. Effect of suspension kinematic on 14 DOF vehicle model

    NASA Astrophysics Data System (ADS)

    Wongpattananukul, T.; Chantharasenawong, C.

    2017-12-01

    Computer simulations play a major role in shaping modern science and engineering, reducing the time and resources required for new studies and designs. Vehicle simulation has been studied extensively to obtain models suitable for minimum-lap-time solutions, and the accuracy of the results depends on how well these models represent the real phenomena. Vehicle models with 7 degrees of freedom (DOF), 10 DOF and 14 DOF are normally used in optimal control to solve for minimum lap time; however, suspension kinematics, defined as wheel movement with respect to the vehicle body, are usually neglected in these models. Because tire forces are expressed as a function of wheel slip and wheel position, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effect on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline model, and experimental data collected from test runs of a Formula Student style car serve as the reference for comparing the baseline model with the model including suspension kinematics. Results show that in a single long turn the baseline model accumulates a trajectory error relative to the model with suspension kinematics, whereas in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics affect the simulated vehicle trajectory, so an optimal control scheme based on the baseline model will be correspondingly inaccurate.

  17. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.

    2012-01-01

    The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models vs. the standard uncorrelated models. The X/Ka astrometric scatter improved by approximately 10% and the systematic Δδ vs. δ slope decreased by approximately 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.

  18. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Bierman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
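
    A first-order Markov (Gauss-Markov) error model of the kind fitted here has an exponentially decaying correlation in both time delay and baseline distance. The sketch below shows one way such a model could be fitted with a standard nonlinear least-squares routine (Levenberg-Marquardt, as in the Marquardt fitting mentioned above); the data values and the exact functional form are illustrative assumptions, not the thesis data or code.

    import numpy as np
    from scipy.optimize import curve_fit

    def markov_correlation(X, var, tau, d0):
        """Covariance of DGPS errors for time lag t (s) and baseline distance d (km),
        assuming a first-order Markov (exponentially correlated) process."""
        t, d = X
        return var * np.exp(-t / tau) * np.exp(-d / d0)

    # Placeholder empirical covariances (illustrative values only)
    t_lag = np.array([0.0, 600.0, 1800.0, 3600.0, 7200.0])
    d_sep = np.array([0.0, 20.0, 60.0, 120.0, 250.0])
    cov = np.array([3.7, 2.7, 1.4, 0.55, 0.08])

    popt, _ = curve_fit(markov_correlation, (t_lag, d_sep), cov,
                        p0=(3.7, 3800.0, 120.0))
    print("process variance, time constant (s), distance constant (km):", popt)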

  19. Model-based cost-effectiveness analysis of interventions aimed at preventing medication error at hospital admission (medicines reconciliation).

    PubMed

    Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn

    2009-04-01

    Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions for which evidence of effectiveness was identified are estimated to be extremely cost-effective when compared with the baseline scenario. The pharmacist-led reconciliation intervention has the highest expected net benefit and a probability of being cost-effective of over 60% at a QALY value of £10,000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions and in their evaluation, and may justify the primary evaluation of more than one specification of included interventions.

  20. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

    The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors and external disturbances with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. The adaptive approximation-based controller achieved excellent tracking-error minimization to the pre-specified level, while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller, tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.
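
    A minimal sketch of the adaptive augmentation idea described above: a radial basis function network whose output weights are adapted online from the tracking error. A generic sigma-modification gradient law with fixed centers is used here purely for illustration; the dissertation's growing/pruning network and Lyapunov-derived update laws are more elaborate.

    import numpy as np

    class RBFAugmentation:
        """Adaptive RBF term added to a baseline (e.g. dynamic inversion) command."""

        def __init__(self, centers, width, gamma=5.0, sigma_mod=0.01):
            self.centers = np.asarray(centers)     # fixed centers in this sketch
            self.width = width
            self.w = np.zeros(len(self.centers))   # output weights, adapted online
            self.gamma = gamma                     # adaptation gain
            self.sigma_mod = sigma_mod             # sigma-modification for robustness

        def phi(self, x):
            d2 = np.sum((self.centers - x) ** 2, axis=1)
            return np.exp(-d2 / (2.0 * self.width ** 2))

        def output(self, x):
            return self.w @ self.phi(x)

        def adapt(self, x, tracking_error, dt):
            # w_dot = gamma * phi(x) * e - sigma * gamma * w  (robust gradient-type law)
            phi = self.phi(x)
            self.w += dt * self.gamma * (phi * tracking_error - self.sigma_mod * self.w)

    # Usage: total command = baseline_dynamic_inversion(x, ref) - rbf.output(x),
    # with rbf.adapt(x, e, dt) called every control step using the tracking error e.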

  21. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  22. The effect of the dynamic wet troposphere on VLBI measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1986-01-01

    Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10⁻¹³ s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.
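
    The two assumptions can be written compactly; this is a sketch of the standard Kolmogorov/frozen-flow formulation rather than the paper's exact expressions. The spatial structure function of the wet refractivity \chi follows the Kolmogorov two-thirds law, and temporal fluctuations arise from the spatial pattern advected over the site with wind velocity \mathbf{v}:

    D_{\chi}(\boldsymbol{\rho}) = \left\langle [\chi(\mathbf{r}+\boldsymbol{\rho}) - \chi(\mathbf{r})]^{2} \right\rangle \propto |\boldsymbol{\rho}|^{2/3},
    \qquad
    \chi(\mathbf{r},\, t+\tau) = \chi(\mathbf{r} - \mathbf{v}\tau,\, t).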

  23. Very long baseline interferometry applied to polar motion, relativity, and geodesy. Ph. D. thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, C.

    1978-01-01

    The causes and effects of diurnal polar motion are described. An algorithm was developed for modeling the effects on very long baseline interferometry observables. A selection was made between two three-station networks for monitoring polar motion. The effects of scheduling and the number of sources observed on estimated baseline errors are discussed. New hardware and software techniques in very long baseline interferometry are described.

  24. Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error

    PubMed Central

    Zhang, Yan; Shen, Jun

    2013-01-01

    Purpose A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods The nonparametric baseline function with a given smoothness was treated as a function of the spectral parameters, and its uncertainty was measured by root-mean-squared error (RMSE). The proposed method was demonstrated with a simulated spectrum and in vivo spectra of both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with metabolite-nulled spectra and with LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results An optimal smoothness condition was found that led to the minimal baseline RMSE. In this condition, the best fit was balanced against minimal baseline influences on metabolite concentration estimates. Conclusion Baseline RMSE can be used to indicate estimated baseline uncertainties and serve as the criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436

  25. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multiple-regression response surface equation (RSE) model. Data obtained from a major airline's operations of a passenger transport aircraft type into Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents more than a 5% reduction compared with the RSE model errors, and at least a 10% reduction relative to the standard deviation of the baseline predicted landing-speed error. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.

  26. Picometer Level Modeling of a Shared Vertex Double Corner Cube in the Space Interferometry Mission Kite Testbed

    NASA Technical Reports Server (NTRS)

    Kuan, Gary M.; Dekens, Frank G.

    2006-01-01

    The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.

  27. Tropospheric delay ray tracing applied in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, D. S.; Gipson, John M.

    2014-12-01

    Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI (very long baseline interferometry) analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium-Range Weather Forecasts data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption is not true, we have instead determined the ray trace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA Goddard Space Flight Center Goddard Earth Observing System version 5 numerical weather model. When applied in VLBI analysis, baseline length repeatabilities were improved compared with using the VMF1 mapping function model for 72% of the baselines and site vertical repeatabilities were better for 11 of 13 sites during the 2 week CONT11 observing period in September 2011. When applied to a larger data set (2011-2013), we see a similar improvement in baseline length and also in site position repeatabilities for about two thirds of the stations in each of the site topocentric components.

  28. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  29. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of the measurements, as in GPS, VLBI baseline and LiDAR observations. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
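
    For context, a multiplicative error model of the type analyzed here can be written in generic notation (not the paper's own symbols) as

    y_{i} = f_{i}(\boldsymbol{\beta})\,(1 + \varepsilon_{i}), \qquad \mathrm{E}[\varepsilon_{i}] = 0, \quad \mathrm{D}[\varepsilon_{i}] = \sigma^{2},

    so that the error standard deviation scales with the true value f_i(\beta), in contrast to the additive model y_i = f_i(\beta) + \varepsilon_i.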

  30. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

    Orbit errors can induce significant long wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims to correct orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.

  31. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

    The Lost-Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise, the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point as a function of the deformation velocities during the time intervals spanned by the interferograms and of a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (>10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the time-series solution and the unconstrained time-series can be interpreted as due to a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in the oil extraction, as well as local height corrections that improve on the low-resolution 3 arc-second SRTM DEM.
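
    A minimal sketch of the per-pixel inversion described above: interval velocities and a DEM-error term are estimated jointly by least squares, with additional weighted finite-difference (acceleration) constraints appended to the design matrix. Variable names, dimensions, and the weighting are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def invert_pixel(phase, A_time, bperp, lam, r, theta, gamma):
        """Solve for interval velocities and a DEM height correction at one pixel.

        phase  : unwrapped interferometric phases (M,)
        A_time : (M, N) matrix; A_time[i, j] = length of time interval j if
                 interferogram i spans it, else 0
        bperp  : perpendicular baselines of the M interferograms (M,)
        """
        # Phase sensitivity to a DEM height error dz: (4*pi/lam) * bperp / (r*sin(theta))
        h_col = (4.0 * np.pi / lam) * bperp / (r * np.sin(theta))
        A = np.hstack([(4.0 * np.pi / lam) * A_time, h_col[:, None]])

        # Finite-difference acceleration constraints: penalize v[j+1] - v[j]
        n = A_time.shape[1]
        D = np.zeros((n - 1, n + 1))
        for j in range(n - 1):
            D[j, j], D[j, j + 1] = -1.0, 1.0

        A_aug = np.vstack([A, gamma * D])                 # gamma = smoothing weight
        b_aug = np.concatenate([phase, np.zeros(n - 1)])

        x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        velocities, dz = x[:-1], x[-1]
        return velocities, dz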

  32. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning

    PubMed Central

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-01-01

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including sidereal filtering algorithm and multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in observation equation to the adjustment model to improve the precision of BDS static baseline solutions. And the results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744

  33. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning.

    PubMed

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-04-07

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including sidereal filtering algorithm and multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in observation equation to the adjustment model to improve the precision of BDS static baseline solutions. And the results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively.

  34. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  35. Very Long Baseline Interferometry Applied to Polar Motion, Relativity and Geodesy. Ph.D. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Ma, C.

    1978-01-01

    The causes and effects of diurnal polar motion are described. An algorithm is developed for modeling the effects on very long baseline interferometry observables. Five years of radio-frequency very long baseline interferometry data from stations in Massachusetts, California, and Sweden are analyzed for diurnal polar motion. It is found that the effect is larger than predicted by McClure. Corrections to the standard nutation series caused by the deformability of the earth have a significant effect on the estimated diurnal polar motion scaling factor and the post-fit residual scatter. Simulations of high precision very long baseline interferometry experiments taking into account both measurement uncertainty and modeled errors are described.

  36. Developing Best Practices for Detecting Change at Marine Renewable Energy Sites

    NASA Astrophysics Data System (ADS)

    Linder, H. L.; Horne, J. K.

    2016-02-01

    In compliance with the National Environmental Policy Act (NEPA), an evaluation of environmental effects is mandatory for obtaining permits for any Marine Renewable Energy (MRE) project in the US. Evaluation includes an assessment of baseline conditions and on-going monitoring during operation to determine if biological conditions change relative to the baseline. Currently, there are no best practices for the analysis of MRE monitoring data. We have developed an approach to evaluate and recommend analytic models used to characterize and detect change in biological monitoring data. The approach includes six steps: review current MRE monitoring practices, identify candidate models to analyze data, fit models to a baseline dataset, develop simulated scenarios of change, evaluate model fit to simulated data, and produce recommendations on the choice of analytic model for monitoring data. An empirical data set from a proposed tidal turbine site at Admiralty Inlet, Puget Sound, Washington was used to conduct the model evaluation. Candidate models that were evaluated included: linear regression, time series, and nonparametric models. Model fit diagnostics Root-Mean-Square-Error and Mean-Absolute-Scaled-Error were used to measure accuracy of predicted values from each model. A power analysis was used to evaluate the ability of each model to measure and detect change from baseline conditions. As many of these models have yet to be applied in MRE monitoring studies, results of this evaluation will generate comprehensive guidelines on choice of model to detect change in environmental monitoring data from MRE sites. The creation of standardized guidelines for model selection enables accurate comparison of change between life stages of a MRE project, within life stages to meet real time regulatory requirements, and comparison of environmental changes among MRE sites.

  37. Performance analysis of an integrated GPS/inertial attitude determination system. M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Sullivan, Wendy I.

    1994-01-01

    The performance of an integrated GPS/inertial attitude determination system is investigated using a linear covariance analysis. The principles of GPS interferometry are reviewed, and the major error sources of both interferometers and gyroscopes are discussed and modeled. A new figure of merit, attitude dilution of precision (ADOP), is defined for two possible GPS attitude determination methods, namely single difference and double difference interferometry. Based on this figure of merit, a satellite selection scheme is proposed. The performance of the integrated GPS/inertial attitude determination system is determined using a linear covariance analysis. Based on this analysis, it is concluded that the baseline errors (i.e., knowledge of the GPS interferometer baseline relative to the vehicle coordinate system) are the limiting factor in system performance. By reducing baseline errors, it should be possible to use lower quality gyroscopes without significantly reducing performance. For the cases considered, single difference interferometry is only marginally better than double difference interferometry. Finally, the performance of the system is found to be relatively insensitive to the satellite selection technique.

  38. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted earlier from design simulations. The GRACE error budget is dominated by noise from the sensors, dealiasing models and modeling errors, and the GRACE range-rate residuals contain these errors. Their analysis therefore provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two reprocessed attitude datasets with differing pointing performance; range-rate residuals are then computed from each of these datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and the range-rate residuals are also observed.

  19. Modeling, Analysis, and Control of Demand Response Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathieu, Johanna L.

    2012-05-01

    While the traditional goal of an electric power system has been to control supply to fulfill demand, the demand-side can play an active role in power systems via Demand Response (DR), defined by the Department of Energy (DOE) as “a tariff or program established to motivate changes in electric use by end-use customers in response to changes in the price of electricity over time, or to give incentive payments designed to induce lower electricity use at times of high market prices or when grid reliability is jeopardized” [29]. DR can provide a variety of benefits including reducing peak electric loads when the power system is stressed and providing fast-timescale energy balancing. Therefore, DR can improve grid reliability and reduce wholesale energy prices and their volatility. This dissertation focuses on analyzing both recent and emerging DR paradigms. Recent DR programs have focused on peak load reduction in commercial buildings and industrial facilities (C&I facilities). We present methods for using 15-minute-interval electric load data, commonly available from C&I facilities, to help building managers understand building energy consumption and ‘ask the right questions’ to discover opportunities for DR. Additionally, we present a regression-based model of whole building electric load, i.e., a baseline model, which allows us to quantify DR performance. We use this baseline model to understand the performance of 38 C&I facilities participating in an automated dynamic pricing DR program in California. In this program, facilities are expected to exhibit the same response during each DR event. We find that baseline model error makes it difficult to precisely quantify changes in electricity consumption and understand if C&I facilities exhibit event-to-event variability in their response to DR signals. Therefore, we present a method to compute baseline model error and a metric to determine how much observed DR variability results from baseline model error rather than real variability in response. We find that, in general, baseline model error is large. Though some facilities exhibit real DR variability, most observed variability results from baseline model error. In some cases, however, aggregations of C&I facilities exhibit real DR variability, which could create challenges for power system operation. These results have implications for DR program design and deployment. Emerging DR paradigms focus on faster timescale DR. Here, we investigate methods to coordinate aggregations of residential thermostatically controlled loads (TCLs), including air conditioners and refrigerators, to manage frequency and energy imbalances in power systems. We focus on opportunities to centrally control loads with high accuracy but low requirements for sensing and communications infrastructure. Specifically, we compare cases when measured load state information (e.g., power consumption and temperature) is 1) available in real time; 2) available, but not in real time; and 3) not available. We develop Markov Chain models to describe the temperature state evolution of heterogeneous populations of TCLs, and use Kalman filtering for both state and joint parameter/state estimation. We present a look-ahead proportional controller to broadcast control signals to all TCLs, which always remain in their temperature dead-band. Simulations indicate that it is possible to achieve power tracking RMS errors in the range of 0.26–9.3% of steady-state aggregated power consumption. Results depend upon the information available for system identification, state estimation, and control. We find that, depending upon the performance required, TCLs may not need to provide state information to the central controller in real time or at all. We also estimate the size of the TCL potential resource; potential revenue from participation in markets; and break-even costs associated with deploying DR-enabling technologies. We find that current TCL energy storage capacity in California is 8–11 GWh, with refrigerators contributing the most. Annual revenues from participation in regulation vary from $10 to $220 per TCL per year depending upon the type of TCL and climate zone, while load following and arbitrage revenues are more modest at $2 to $35 per TCL per year. These results lead to a number of policy recommendations that will make it easier to engage residential loads in fast-timescale DR.
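
    As a rough illustration of the regression-based baseline model described above (a hedged sketch with synthetic 15-minute data, not the dissertation's model; in practice, event days would be excluded from the fit), the demand-response shed can be estimated as the gap between the predicted baseline and the observed load during an event window:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 96 * 20                                # 20 days of 15-minute intervals
        slot = np.arange(n) % 96                   # 15-minute slot within the day
        oat = 20 + 8 * np.sin(2 * np.pi * slot / 96) + rng.normal(0, 1, n)   # outside air temperature
        load = 100 + 0.8 * oat + 30 * ((slot // 4 >= 8) & (slot // 4 < 18)) + rng.normal(0, 5, n)

        # Baseline regression: one indicator per 15-minute slot plus a temperature term.
        X = np.zeros((n, 97))
        X[np.arange(n), slot] = 1.0
        X[:, 96] = oat
        beta, *_ = np.linalg.lstsq(X, load, rcond=None)

        # DR performance during a hypothetical 4-hour event on the last day:
        # shed = predicted baseline minus observed (reduced) load.
        event = slice(96 * 19 + 48, 96 * 19 + 64)
        baseline_pred = X[event] @ beta
        observed = load[event] - 15.0              # pretend the facility shed 15 kW
        print("mean estimated shed (kW):", round(float(np.mean(baseline_pred - observed)), 1))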

  20. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10(exp -15) sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.

  1. Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?

    PubMed Central

    Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.

    2016-01-01

    Aims Baseline HbA1c is a major predictor of response to glucose lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate ‘pre-baseline’ HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results Baseline HbA1c was a major predictor of response (R2 = 0.19, β = -0.44, p < 0.001). The association between pre-baseline and response was similar, suggesting the greater response at higher baseline HbA1cs is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analysis in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose lowering therapy. PMID:27050911
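
    A hedged numerical sketch of the adjustment question (synthetic data, not the study's analysis): when a candidate predictor is correlated with baseline HbA1c, its unadjusted association with response largely reflects the baseline effect and shrinks once baseline HbA1c is included in the regression.

        import numpy as np

        def ols(X, y):
            X = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta

        rng = np.random.default_rng(2)
        n = 500
        covariate = rng.normal(0, 1, n)                          # hypothetical predictor of interest
        baseline = 75 + 5 * covariate + rng.normal(0, 8, n)      # baseline HbA1c (mmol/mol)
        response = -0.4 * (baseline - 75) + rng.normal(0, 6, n)  # change in HbA1c (negative = fall)

        b_unadj = ols(covariate, response)
        b_adj = ols(np.column_stack([covariate, baseline]), response)
        print("unadjusted covariate coefficient:", round(b_unadj[1], 2))        # reflects baseline confounding
        print("baseline-adjusted covariate coefficient:", round(b_adj[1], 2))   # close to zero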

  2. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
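
    A minimal sketch of the approximation idea, with made-up correlation times and amplitudes rather than the paper's values: the clock error is modeled as the sum of five first-order (Gauss-Markov) processes whose combined spectrum approximates a target power spectral density.

        import numpy as np

        def gauss_markov(n, dt, tau, sigma, rng):
            # Discrete first-order Gauss-Markov process: x[k+1] = phi*x[k] + w[k].
            phi = np.exp(-dt / tau)
            w_std = sigma * np.sqrt(1.0 - phi ** 2)
            x = np.zeros(n)
            for k in range(1, n):
                x[k] = phi * x[k - 1] + rng.normal(0.0, w_std)
            return x

        rng = np.random.default_rng(3)
        dt, n = 1.0, 20_000
        taus = [1.0, 10.0, 100.0, 1_000.0, 10_000.0]        # hypothetical correlation times (s)
        sigmas = [1e-11, 5e-12, 2e-12, 1e-12, 5e-13]         # hypothetical steady-state amplitudes
        clock_error = sum(gauss_markov(n, dt, tau, sig, rng) for tau, sig in zip(taus, sigmas))
        print("simulated clock-error standard deviation:", clock_error.std())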

  3. Troposphere Delay Raytracing Applied in VLBI Analysis

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, Daniel; Gipson, John

    2014-12-01

    Tropospheric delay modeling error is one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium-Range Weather Forecasts (ECMWF) data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have instead determined the raytrace delay along the signal path through the three-dimensional troposphere refractivity field for each VLBI quasar observation. We calculated the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results using raytrace delay in the analysis of the CONT11 R&D sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 70% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 2/3 of all stations. The reference frame scale bias error was 0.02 ppb for raytracing versus 0.08 ppb and 0.06 ppb for VMF1 and NMF, respectively.

  4. Differences in the effects of school meals on children's cognitive performance according to gender, household education and baseline reading skills.

    PubMed

    Sørensen, L B; Damsgaard, C T; Petersen, R A; Dalskov, S-M; Hjorth, M F; Dyssegaard, C B; Egelund, N; Tetens, I; Astrup, A; Lauritzen, L; Michaelsen, K F

    2016-10-01

    We previously found that the OPUS School Meal Study improved reading and increased errors related to inattention and impulsivity. This study explored whether the cognitive effects differed according to gender, household education and reading proficiency at baseline. This is a cluster-randomised cross-over trial comparing Nordic school meals with packed lunch from home (control) for 3 months each among 834 children aged 8 to 11 years. At baseline and at the end of each dietary period, we assessed children's performance in reading, mathematics and the d2-test of attention. Interactions were evaluated using mixed models. Analyses included 739 children. At baseline, boys and children from households without academic education were poorer readers and had a higher d2-error%. Effects on dietary intake were similar across subgroups. However, the effect of the intervention on test outcomes was stronger in boys, in children from households with academic education and in children with normal/good baseline reading proficiency. Overall, this resulted in increased socioeconomic inequality in reading performance and reduced inequality in impulsivity. Contrary to this, the gender difference decreased in reading and increased in impulsivity. Finally, the gap between poor and normal/good readers was increased in reading and decreased for d2-error%. The effects of healthy school meals on reading, impulsivity and inattention were modified by gender, household education and baseline reading proficiency. The differential effects might be related to environmental aspects of the intervention and deserve to be investigated further in future school meal trials.

  5. Benefit of Modeling the Observation Error in a Data Assimilation Framework Using Vegetation Information Obtained From Passive Based Microwave Data

    NASA Technical Reports Server (NTRS)

    Bolten, John D.; Mladenova, Iliana E.; Crow, Wade; De Jeu, Richard

    2016-01-01

    A primary operational goal of the United States Department of Agriculture (USDA) is to improve foreign market access for U.S. agricultural products. A large fraction of this crop condition assessment is based on satellite imagery and ground data analysis. The baseline soil moisture estimates that are currently used for this analysis are based on output from the modified Palmer two-layer soil moisture model, updated to assimilate near-real time observations derived from the Soil Moisture Ocean Salinity (SMOS) satellite. The current data assimilation system is based on a 1-D Ensemble Kalman Filter approach, where the observation error is modeled as a function of vegetation density. This allows for offsetting errors in the soil moisture retrievals. The observation error is currently adjusted using Normalized Difference Vegetation Index (NDVI) climatology. In this paper we explore the possibility of utilizing microwave-based vegetation optical depth instead.
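
    The weighting idea can be sketched as follows (illustrative only; the error-versus-vegetation mapping and the numbers are invented, not the USDA system's): a 1-D Kalman-type update in which the observation-error variance grows with vegetation optical depth, so retrievals under dense canopy receive less weight.

        import numpy as np

        def obs_error_var(vod, r_min=0.02 ** 2, k=0.05 ** 2):
            # Observation-error variance grows with vegetation optical depth (made-up mapping).
            return r_min + k * vod

        def kalman_update(x_bkg, p_bkg, y_obs, r_obs):
            gain = p_bkg / (p_bkg + r_obs)
            return x_bkg + gain * (y_obs - x_bkg), (1.0 - gain) * p_bkg

        x_bkg, p_bkg = 0.25, 0.03 ** 2                 # background soil moisture (m3/m3) and its variance
        for vod, y_obs in [(0.1, 0.30), (0.8, 0.30)]:  # same retrieval under sparse vs dense vegetation
            x_an, p_an = kalman_update(x_bkg, p_bkg, y_obs, obs_error_var(vod))
            print(f"VOD={vod}: analysis soil moisture = {x_an:.3f}")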

  6. Longitudinal predictive ability of mapping models: examining post-intervention EQ-5D utilities derived from baseline MHAQ data in rheumatoid arthritis patients.

    PubMed

    Kontodimopoulos, Nick; Bozios, Panagiotis; Yfantopoulos, John; Niakas, Dimitris

    2013-04-01

    The purpose of this methodological study was to provide insight into the under-addressed issue of the longitudinal predictive ability of mapping models. Post-intervention predicted and reported utilities were compared, and the effect of disease severity on the observed differences was examined. A cohort of 120 rheumatoid arthritis (RA) patients (60.0% female, mean age 59.0) embarking on therapy with biological agents completed the Modified Health Assessment Questionnaire (MHAQ) and the EQ-5D at baseline, and at 3, 6 and 12 months post-intervention. OLS regression produced a mapping equation to estimate post-intervention EQ-5D utilities from baseline MHAQ data. Predicted and reported utilities were compared with a t test, and the prediction error was modeled, using fixed effects, in terms of covariates such as age, gender, time, disease duration, treatment, RF, DAS28 score, predicted and reported EQ-5D. The OLS model (RMSE = 0.207, R2 = 45.2%) consistently underestimated future utilities, with a mean prediction error of 6.5%. Mean absolute differences between reported and predicted EQ-5D utilities at 3, 6 and 12 months exceeded the typically reported MID of the EQ-5D (0.03). According to the fixed-effects model, time, lower predicted EQ-5D and higher DAS28 scores had a significant impact on prediction errors, which appeared increasingly negative for lower reported EQ-5D scores, i.e., predicted utilities tended to be lower than reported ones in more severe health states. This study builds upon existing research having demonstrated the potential usefulness of mapping disease-specific instruments onto utility measures. The specific issue of longitudinal validity is addressed, as mapping models derived from baseline patients need to be validated on post-therapy samples. The underestimation of post-treatment utilities in the present study, at least in more severe patients, warrants further research before it is prudent to conduct cost-utility analyses in the context of RA by means of the MHAQ alone.

  7. Tropospheric Delay Raytracing Applied in VLBI Analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; Eriksson, D.; Gipson, J. M.

    2013-12-01

    Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from ECMWF data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have determined the raytrace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results from analysis of the CONT11 R&D and the weekly operational R1+R4 experiment sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 66-72% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 65% of sites.

  8. An improved empirical model for diversity gain on Earth-space propagation paths

    NASA Technical Reports Server (NTRS)

    Hodge, D. B.

    1981-01-01

    An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.

  9. A Conjoint Analysis Framework for Evaluating User Preferences in Machine Translation

    PubMed Central

    Kirchhoff, Katrin; Capurro, Daniel; Turner, Anne M.

    2013-01-01

    Despite much research on machine translation (MT) evaluation, there is surprisingly little work that directly measures users’ intuitive or emotional preferences regarding different types of MT errors. However, the elicitation and modeling of user preferences is an important prerequisite for research on user adaptation and customization of MT engines. In this paper we explore the use of conjoint analysis as a formal quantitative framework to assess users’ relative preferences for different types of translation errors. We apply our approach to the analysis of MT output from translating public health documents from English into Spanish. Our results indicate that word order errors are clearly the most dispreferred error type, followed by word sense, morphological, and function word errors. The conjoint analysis-based model is able to predict user preferences more accurately than a baseline model that chooses the translation with the fewest errors overall. Additionally we analyze the effect of using a crowd-sourced respondent population versus a sample of domain experts and observe that main preference effects are remarkably stable across the two samples. PMID:24683295

  10. Predicting cognitive function from clinical measures of physical function and health status in older adults.

    PubMed

    Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa

    2015-01-01

    Current research suggests that the neuropathology of dementia-including brain changes leading to memory impairment and cognitive decline-is evident years before the onset of this disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk for developing dementia. Therefore, identifying biomarkers that can be easily assessed within the clinical setting and predict cognitive decline is important. Early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: 1) based on baseline cognitive function, 2) based on variables consistently selected in every cross-validation loop, and 3) a full model based on all the 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47. This number was smaller than that of the full model (115.33) and the model with baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. We built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing the complexity of the model without changing the model significantly, our model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
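
    A hedged sketch of the selection procedure described above (synthetic stand-in data; scikit-learn's ElasticNetCV is used here as one possible implementation of the L1-L2 regularized regression): variables whose coefficients are nonzero in every cross-validation fold form the parsimonious model.

        import numpy as np
        from sklearn.linear_model import ElasticNetCV
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(4)
        n, p = 89, 32
        X = rng.normal(size=(n, p))                                   # 32 baseline measures
        y = X[:, :6] @ np.array([2.0, -1.5, 1.0, 1.0, -0.5, 0.8]) + rng.normal(0, 1, n)

        selected_per_fold = []
        for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X[train_idx], y[train_idx])
            selected_per_fold.append(set(np.flatnonzero(model.coef_)))

        always_selected = sorted(set.intersection(*selected_per_fold))
        print("variables selected in every fold:", always_selected)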

  11. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    PubMed

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository.

  12. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches

    PubMed Central

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository. PMID:29551985

  13. Longitudinal decline of driving safety in Parkinson disease.

    PubMed

    Uc, Ergun Y; Rizzo, Matthew; O'Shea, Amy M J; Anderson, Steven W; Dawson, Jeffrey D

    2017-11-07

    To longitudinally assess and predict on-road driving safety in Parkinson disease (PD). Drivers with PD (n = 67) and healthy controls (n = 110) drove a standardized route in an instrumented vehicle and were invited to return 2 years later. A professional driving expert reviewed drive data and videos to score safety errors. At baseline, drivers with PD performed worse on visual, cognitive, and motor tests, and committed more road safety errors compared to controls (median PD 38.0 vs controls 30.5; p < 0.001). A smaller proportion of drivers with PD returned for repeat testing (42.8% vs 62.7%; p < 0.01). At baseline, returnees with PD made fewer errors than nonreturnees with PD (median 34.5 vs 40.0; p < 0.05) and performed similarly to control returnees (median 33). Baseline global cognitive performance of returnees with PD was better than that of nonreturnees with PD, but worse than for control returnees (p < 0.05). After 2 years, returnees with PD showed greater cognitive decline and a larger increase in error counts than control returnees (median increase PD 13.5 vs controls 3.0; p < 0.001). Driving error count increase in the returnees with PD was predicted by greater error count and worse visual acuity at baseline, and by greater interval worsening of global cognition, Unified Parkinson's Disease Rating Scale activities of daily living score, executive functions, visual processing speed, and attention. Despite dropout of the more impaired drivers within the PD cohort, returning drivers with PD, who drove like controls without PD at baseline, showed many more driving safety errors than controls after 2 years. Driving decline in PD was predicted by baseline driving performance and deterioration of cognitive, visual, and functional abnormalities on follow-up. © 2017 American Academy of Neurology.

  14. Utilization of satellite-satellite tracking data for determination of the geocentric gravitational constant (GM)

    NASA Technical Reports Server (NTRS)

    Martin, C. F.; Oh, I. H.

    1979-01-01

    Range rate tracking of GEOS 3 through the ATS 6 satellite was used, along with ground tracking of GEOS 3, to estimate the geocentric gravitational constant (GM). Using multiple half-day arcs, a GM of 398600.52 ± 0.12 km³/s² was estimated using the GEM 10 gravity model, based on a speed of light of 299792.458 km/s. Tracking station coordinates were simultaneously adjusted, leaving geopotential model error as the dominant error source. Baselines between the adjusted NASA laser sites show better than 15 cm agreement with multiple short arc GEOS 3 solutions.

  15. Establishment of a rotor model basis

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1982-01-01

    Radial-dimension computations in the RSRA's blade-element model are modified for both the acquisition of extensive baseline data and for real-time simulation use. The baseline data, which are for the evaluation of model changes, use very small increments and are of high quality. The modifications to the real-time simulation model are for accuracy improvement, especially when a minimal number of blade segments is required for real-time synchronization. An accurate technique for handling tip loss in discrete blade models is developed. The mathematical consistency and convergence properties of summation algorithms for blade forces and moments are examined and generalized integration coefficients are applied to equal-annuli midpoint spacing. Rotor conditions identified as 'constrained' and 'balanced' are used and the propagation of error is analyzed.

  16. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to have proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even on isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and time scales of less than a day.

  17. Baseline estimation in flame's spectra by using neural networks and robust statistics

    NASA Astrophysics Data System (ADS)

    Garces, Hugo; Arias, Luis; Rojas, Alejandro

    2014-09-01

    This work presents a baseline estimation method for flame spectra based on an artificial-intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically discriminate the measured wavelengths that belong to the continuous (baseline) feature for model adaptation, thereby removing the restriction of having to measure the target baseline for training. The main contributions of this paper are: to analyze a flame spectra database by computing Jolliffe statistics from Principal Components Analysis, detecting wavelengths not correlated with most of the measured data and therefore corresponding to baseline; to systematically determine the optimal number of neurons in hidden layers based on Akaike's Final Prediction Error; to estimate the baseline over the full wavelength range of the sampled spectra; and to train a neural network that generalizes the relation between measured and baseline spectra. The main application of our research is to compute total radiation with baseline information, allowing diagnosis of the combustion process state for optimization in early stages.
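
    As a simplified stand-in for the wavelength-screening step (the paper uses Jolliffe statistics from Principal Components Analysis; the sketch below instead flags wavelengths whose variance is poorly captured by the leading components, which serves the same purpose on synthetic spectra):

        import numpy as np

        rng = np.random.default_rng(5)
        n_spectra, n_wl = 200, 300
        emission = rng.normal(size=(n_spectra, 3)) @ rng.normal(size=(3, n_wl))   # correlated emission features
        spectra = emission + rng.normal(0, 0.1, size=(n_spectra, n_wl))
        spectra[:, 250:] = rng.normal(0, 1.0, size=(n_spectra, 50))               # baseline-like channels

        Xc = spectra - spectra.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        k = 3
        recon = (U[:, :k] * s[:k]) @ Vt[:k]                  # rank-k reconstruction
        unexplained = 1.0 - recon.var(axis=0) / Xc.var(axis=0)
        baseline_wl = np.flatnonzero(unexplained > 0.5)      # wavelengths treated as baseline
        print("number of baseline wavelengths found:", baseline_wl.size)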

  18. Effect of wet tropospheric path delays on estimation of geodetic baselines in the Gulf of California using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.

    1988-01-01

    Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used to provide an independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California using GPS in which high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration and stochastic estimation for residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.

  19. Real-Time Minimization of Tracking Error for Aircraft Systems

    NASA Technical Reports Server (NTRS)

    Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John

    2013-01-01

    This technology presents a novel, stable, discrete-time adaptive law for flight control in a Direct Adaptive Control (DAC) framework. In the absence of errors, the original control design has been tuned for optimal performance. Adaptive control works towards achieving nominal performance whenever the design has modeling uncertainties or errors, or when the vehicle undergoes a substantial flight-configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to a dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.

  20. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
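
    The role of the correlation coefficient can be made concrete with a small worked example (values invented): for an increment Delta = A - B formed from two results with uncertainties sA and sB and error correlation rho, var(Delta) = sA^2 + sB^2 - 2*rho*sA*sB, so systematic errors largely cancel only when rho is close to unity.

        import numpy as np

        s_a = s_b = 0.05                 # hypothetical standard uncertainty of each CFD result
        for rho in (0.0, 0.5, 0.95):
            s_delta = np.sqrt(s_a ** 2 + s_b ** 2 - 2.0 * rho * s_a * s_b)
            print(f"rho = {rho:.2f}: increment uncertainty = {s_delta:.4f}")

        # One way to estimate rho from paired code-to-code differences d_i = a_i - b_i over shared
        # cases: var(d) = s_a**2 + s_b**2 - 2*rho*s_a*s_b, so
        # rho = (s_a**2 + s_b**2 - np.var(d)) / (2 * s_a * s_b).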

  1. Evaluation of very long baseline interferometry atmospheric modeling improvements

    NASA Technical Reports Server (NTRS)

    Macmillan, D. S.; Ma, C.

    1994-01-01

    We determine the improvement in baseline length precision and accuracy using new atmospheric delay mapping functions and MTT by analyzing the NASA Crustal Dynamics Project research and development (R&D) experiments and the International Radio Interferometric Surveying (IRIS) A experiments. These mapping functions reduce baseline length scatter by about 20% below that using the CfA2.2 dry and Chao wet mapping functions. With the newer mapping functions, average station vertical scatter inferred from observed length precision (given by length repeatabilities) is 11.4 mm for the 1987-1990 monthly R&D series of experiments and 5.6 mm for the 3-week-long extended research and development experiment (ERDE) series. The inferred monthly R&D station vertical scatter is reduced by 2 mm or by 7 mm in a root-sum-square (rss) sense. Length repeatabilities are optimum when observations below a 7-8 deg elevation cutoff are removed from the geodetic solution. Analyses of IRIS-A data from 1984 through 1991 and the monthly R&D experiments both yielded a nonatmospheric unmodeled station vertical error of about 8 mm. In addition, analysis of the IRIS-A experiments revealed systematic effects in the evolution of some baseline length measurements. The length rate of change has an apparent acceleration, and the length evolution has a quasi-annual signature. We show that the origin of these effects is unlikely to be related to atmospheric modeling errors. Rates of change of the transatlantic Westford-Wettzell and Richmond-Wettzell baseline lengths calculated from 1988 through 1991 agree with the NUVEL-1 plate motion model (Argus and Gordon, 1991) to within 1 mm/yr. Short-term (less than 90 days) variations of IRIS-A baseline length measurements contribute more than 90% of the observed scatter about a best fit line, and this short-term scatter has large variations on an annual time scale.

  2. Current Status of the Development of a Transportable and Compact VLBI System by NICT and GSI

    NASA Technical Reports Server (NTRS)

    Ishii, Atsutoshi; Ichikawa, Ryuichi; Takiguchi, Hiroshi; Takefuji, Kazuhiro; Ujihara, Hideki; Koyama, Yasuhiro; Kondo, Tetsuro; Kurihara, Shinobu; Miura, Yuji; Matsuzaka, Shigeru; hide

    2010-01-01

    MARBLE (Multiple Antenna Radio-interferometer for Baseline Length Evaluation) is under development by NICT and GSI. The main part of MARBLE is a transportable VLBI system with a compact antenna. The aim of this system is to provide precise baseline lengths over about 10 km for calibration baselines. The calibration baselines are used to check and validate surveying instruments such as GPS receivers and EDMs (Electro-optical Distance Meters). It is necessary to examine the calibration baselines regularly to maintain the quality of the validation. The VLBI technique can examine and evaluate the calibration baselines. On the other hand, the following roles are expected of a compact VLBI antenna in the VLBI2010 project. In order to achieve the challenging measurement precision of VLBI2010, it is well known that it is necessary to deal with the problem of thermal and gravitational deformation of the antenna. One promising approach may be connected-element interferometry between a compact antenna and a VLBI2010 antenna. By repeatedly measuring the baseline between the small stable antenna and the VLBI2010 antenna, the deformation of the primary antenna can be measured and the thermal and gravitational models of the primary antenna can be constructed. We made two prototypes of a transportable and compact VLBI system from 2007 to 2009. We performed VLBI experiments using these prototypes and obtained a baseline length between the two prototypes. The formal error of the measured baseline length was 2.7 mm. We expect that the baseline length error will be reduced by using a high-speed A/D sampler.

  3. Virtual tape measure for the operating microscope: system specifications and performance evaluation.

    PubMed

    Kim, M Y; Drake, J M; Milgram, P

    2000-01-01

    The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.

  4. Satellite-based Calibration of Heat Flux at the Ocean Surface

    NASA Astrophysics Data System (ADS)

    Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.

    2016-02-01

    Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite-calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.

  5. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.

  6. Effect of horizontal displacements due to ocean tide loading on the determination of polar motion and UT1

    NASA Astrophysics Data System (ADS)

    Scherneck, Hans-Georg; Haas, Rüdiger

    We show the influence of horizontal displacements due to ocean tide loading on the determination of polar motion and UT1 (PMU) on daily and subdaily timescales. So-called ‘virtual PMU variations’ due to modelling errors of ocean tide loading are predicted for geodetic Very Long Baseline Interferometry (VLBI) networks. This leads to errors in the subdaily determination of PMU. The predicted effects are confirmed by the analysis of geodetic VLBI observations.

  7. Calibration of Ocean Forcing with satellite Flux Estimates (COFFEE)

    NASA Astrophysics Data System (ADS)

    Barron, Charlie; Jan, Dastugue; Jackie, May; Rowley, Clark; Smith, Scott; Spence, Peter; Gremes-Cordero, Silvia

    2016-04-01

    Predicting the evolution of ocean temperature in regional ocean models depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. Within the COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates), real-time satellite observations are used to estimate shortwave, longwave, sensible, and latent air-sea heat flux corrections to a background estimate from the prior day's regional or global model forecast. These satellite-corrected fluxes are used to prepare a corrected ocean hindcast and to estimate flux error covariances to project the heat flux corrections for a 3-5 day forecast. In this way, satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. While traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle, COFFEE endeavors to appropriately partition and reduce forecast error among various surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using operational global or regional atmospheric forcing. Experiment cases combine different levels of flux calibration with assimilation alternatives. The cases use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.

  8. Accuracy of computerized automatic identification of cephalometric landmarks by a designed software.

    PubMed

    Shahidi, Sh; Shahidi, S; Oshagh, M; Gozin, F; Salehi, P; Danaei, S M

    2013-01-01

    The purpose of this study was to design software for localization of cephalometric landmarks and to evaluate its accuracy in finding landmarks. 40 digital cephalometric radiographs were randomly selected. 16 landmarks which were important in most cephalometric analyses were chosen to be identified. Three expert orthodontists manually identified landmarks twice. The mean of two measurements of each landmark was defined as the baseline landmark. The computer was then able to compare the automatic system's estimate of a landmark with the baseline landmark. The software was designed using Delphi and Matlab programming languages. The techniques were template matching, edge enhancement and some accessory techniques. The total mean error between manually identified and automatically identified landmarks was 2.59 mm. 12.5% of landmarks had mean errors less than 1 mm. 43.75% of landmarks had mean errors less than 2 mm. The mean errors of all landmarks except the anterior nasal spine were less than 4 mm. This software had significant accuracy for localization of cephalometric landmarks and could be used in future applications. It seems that the accuracy obtained with the software which was developed in this study is better than previous automated systems that have used model-based and knowledge-based approaches.

  9. An evaluation of water vapor radiometer data for calibration of the wet path delay in very long baseline interferometry experiments

    NASA Technical Reports Server (NTRS)

    Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.

    1991-01-01

    The internal consistency of the baseline-length measurements derived from analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the length measurements of the baselines by 5-10 percent. The observed reduction in the scatter of the baseline lengths is less than what is expected from the behavior of the formal errors, which suggest that the baseline-length measurement precision should improve 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping function parameters, instrumental biases in the WVR and the barometer, or both.

  10. Atmospheric pressure loading effects on Global Positioning System coordinate determinations

    NASA Technical Reports Server (NTRS)

    Vandam, Tonie M.; Blewitt, Geoffrey; Heflin, Michael B.

    1994-01-01

    Earth deformation signals caused by atmospheric pressure loading are detected in vertical position estimates at Global Positioning System (GPS) stations. Surface displacements due to changes in atmospheric pressure account for up to 24% of the total variance in the GPS height estimates. The detected loading signals are larger at higher latitudes where pressure variations are greatest; the largest effect is observed at Fairbanks, Alaska (latitude 65 deg), with a signal root mean square (RMS) of 5 mm. Out of 19 continuously operating GPS sites (with a mean of 281 daily solutions per site), 18 show a positive correlation between the GPS vertical estimates and the modeled loading displacements. Accounting for loading reduces the variance of the vertical station positions on 12 of the 19 sites investigated. Removing the modeled pressure loading from GPS determinations of baseline length for baselines longer than 6000 km reduces the variance on 73 of the 117 baselines investigated. The slight increase in variance for some of the sites and baselines is consistent with expected statistical fluctuations. The results from most stations are consistent with approximately 65% of the modeled pressure load being found in the GPS vertical position measurements. Removing an annual signal from both the measured heights and the modeled load time series leaves this value unchanged. The source of the remaining discrepancy between the modeled and observed loading signal may be the result of (1) anisotropic effects in the Earth's loading response, (2) errors in GPS estimates of tropospheric delay, (3) errors in the surface pressure data, or (4) annual signals in the time series of loading and station heights. In addition, we find that using site dependent coefficients, determined by fitting local pressure to the modeled radial displacements, reduces the variance of the measured station heights as well as or better than using the global convolution sum.

  11. Accounting for misclassification error in retrospective smoking data.

    PubMed

    Kenkel, Donald S; Lillard, Dean R; Mathios, Alan D

    2004-10-01

    Recent waves of major longitudinal surveys in the US and other countries include retrospective questions about the timing of smoking initiation and cessation, creating a potentially important but under-utilized source of information on smoking behavior over the life course. In this paper, we explore the extent of, consequences of, and possible solutions to misclassification errors in models of smoking participation that use data generated from retrospective reports. In our empirical work, we exploit the fact that the National Longitudinal Survey of Youth 1979 provides both contemporaneous and retrospective information about smoking status in certain years. We compare the results from four sets of models of smoking participation. The first set of results are from baseline probit models of smoking participation from contemporaneously reported information. The second set of results are from models that are identical except that the dependent variable is based on retrospective information. The last two sets of results are from models that take a parametric approach to account for a simple form of misclassification error. Our preliminary results suggest that accounting for misclassification error is important. However, the adjusted maximum likelihood estimation approach to account for misclassification does not always perform as expected. Copyright 2004 John Wiley & Sons, Ltd.

  12. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.

  13. Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

    NASA Astrophysics Data System (ADS)

    Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato

    2017-02-01

    Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to a perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict the learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in general cases, and we characterize the general performance of Cho's model.
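
    A minimal numpy sketch of node-perturbation learning for a single linear output unit with an optionally noisy baseline, the setting the paper analyzes. The squared-error objective, teacher-student setup, and parameter values are illustrative assumptions, not the paper's formulation.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100                                   # input dimension
    w_teacher = rng.standard_normal(n) / np.sqrt(n)
    w = np.zeros(n)
    eta, sigma, baseline_noise = 0.5, 0.1, 0.01

    def objective(output, target):
        return 0.5 * (output - target) ** 2

    for step in range(5000):
        x = rng.standard_normal(n) / np.sqrt(n)
        target = w_teacher @ x
        y = w @ x
        xi = sigma * rng.standard_normal()            # output perturbation
        E_perturbed = objective(y + xi, target)
        # baseline = objective of the unperturbed output, observed with noise
        E_baseline = objective(y, target) + baseline_noise * rng.standard_normal()
        # gradient estimate from the change in the objective along the perturbation
        w -= eta * (E_perturbed - E_baseline) / sigma**2 * xi * x

    print("generalization error ~", 0.5 * np.sum((w - w_teacher) ** 2) / n)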

  14. Continuous Glucose Monitoring in Newborn Infants

    PubMed Central

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L.; Weston, Philip J.; Harding, Jane E.; Shaw, Geoffrey M.

    2014-01-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia it is important to understand the impact of these errors on CGM data. PMID:24876618
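
    A hedged sketch of the Monte-Carlo idea only: perturb the calibration BG measurements with a simple multiplicative meter-error model, re-calibrate a synthetic CGM trace, and compare a hypoglycemia metric against the unperturbed baseline. The linear calibration, 5% error model, and 2.6 mmol/L threshold are assumptions for illustration, not the study's error models or code.

    import numpy as np

    rng = np.random.default_rng(3)

    def calibrate(sensor_signal, cal_times, cal_bg, times):
        """Fit signal -> glucose linearly at calibration points, apply to full trace."""
        s_cal = np.interp(cal_times, times, sensor_signal)
        slope, intercept = np.polyfit(s_cal, cal_bg, 1)
        return slope * sensor_signal + intercept

    def minutes_below(cgm, times, threshold=2.6):
        return np.sum(np.diff(times, prepend=times[0])[cgm < threshold])

    times = np.arange(0, 48 * 60, 5.0)                        # 48 h at 5-min samples
    true_glucose = 3.0 + 0.8 * np.sin(times / 400.0)          # mmol/L, synthetic
    sensor = 20.0 * true_glucose + 5.0                        # arbitrary sensor units
    cal_times = np.arange(0, 48 * 60, 6 * 60.0)               # 6-hourly calibrations
    cal_bg_true = np.interp(cal_times, times, true_glucose)

    baseline = minutes_below(calibrate(sensor, cal_times, cal_bg_true, times), times)
    results = []
    for _ in range(1000):                                     # Monte-Carlo runs
        cal_bg_err = cal_bg_true * (1 + 0.05 * rng.standard_normal(cal_bg_true.size))
        results.append(minutes_below(calibrate(sensor, cal_times, cal_bg_err, times), times))
    print("baseline minutes <2.6 mmol/L:", baseline, " simulated range:", min(results), max(results))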

  15. Presentation Of The Small Baseline NSBAS Processing Chain On A Case Example: The ETNA Deformation Monitoring From 2003 to 2010 Using ENVISAT Data

    NASA Astrophysics Data System (ADS)

    Doin, Marie-Pierre; Lodge, Felicity; Guillaso, Stephane; Jolivet, Romain; Lasserre, Cecile; Ducret, Gabriel; Grandin, Raphael; Pathier, Erwan; Pinel, Virginie

    2012-01-01

    We assemble a processing chain that handles InSAR computation from raw data to time series analysis. A large part of the chain (from raw data to geocoded unwrapped interferograms) is based on ROI PAC modules (Rosen et al., 2004), with original routines rearranged and combined with new routines to process in series and in a common radar geometry all SAR images and interferograms. A new feature of the software is the range-dependent spectral filtering to improve coherence in interferograms with long spatial baselines. Additional components include a module to estimate and remove digital elevation model errors before unwrapping, a module to mitigate the effects of the atmospheric phase delay and remove residual orbit errors, and a module to construct the phase change time series from small baseline interferograms (Berardino et al. 2002). This paper describes the main elements of the processing chain and presents an example of application of the software using a data set from the ENVISAT mission covering the Etna volcano.
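
    A minimal sketch of the small-baseline inversion step mentioned above: each unwrapped interferogram is the phase difference between two acquisition dates, and the per-date phase time series is recovered by least squares with the first date taken as reference. This is purely illustrative and is not the NSBAS implementation.

    import numpy as np

    def sbas_invert(pairs, ifg_phase, n_dates):
        """pairs: list of (i, j) date indices with i < j; ifg_phase: phase of each pair."""
        A = np.zeros((len(pairs), n_dates - 1))       # unknowns: phase at dates 1..n-1
        for row, (i, j) in enumerate(pairs):
            if j > 0:
                A[row, j - 1] += 1.0
            if i > 0:
                A[row, i - 1] -= 1.0
        phi, *_ = np.linalg.lstsq(A, np.asarray(ifg_phase), rcond=None)
        return np.concatenate([[0.0], phi])           # phase relative to the first date

    # toy network of 5 dates and 6 small-baseline interferograms
    true_phi = np.array([0.0, 0.3, 0.8, 1.1, 1.7])
    pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (2, 4)]
    ifgs = [true_phi[j] - true_phi[i] + 0.01 * np.random.randn() for i, j in pairs]
    print(sbas_invert(pairs, ifgs, n_dates=5))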

  16. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  17. Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.

    PubMed

    Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E

    2017-04-01

    A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for > 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Using the Model for Improvement, our improvement efforts were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.

  18. Modelling exoplanet detection with the LUVOIR Coronagraph: aberration sensitivity and error tolerances

    NASA Astrophysics Data System (ADS)

    Juanola-Parramon, Roser; Zimmerman, Neil; Bolcar, Matthew R.; Rizzo, Maxime; Roberge, Aki

    2018-01-01

    The Coronagraph is a key instrument on the Large UV-Optical-Infrared (LUVOIR) Surveyor mission concept. The Apodized Pupil Lyot Coronagraph (APLC) is one of the baselined mask technologies to enable 10^-10 contrast observations in the habitable zones of nearby stars. Both LUVOIR architectures A and B present a segmented aperture as the input pupil, whose segments introduce random tip/tilt and piston errors, among others, that greatly affect the performance of the coronagraph instrument by increasing the wavefront error and hence reducing instrument sensitivity. In this poster we present the latest results of the simulation of these effects for different working-angle regions and discuss the achieved contrast for exoplanet detection and characterization, including simulated observations under these circumstances, setting boundaries for the tolerance of such errors.

  19. Continuous glucose monitoring in newborn infants: how do errors in calibration measurements affect detected hypoglycemia?

    PubMed

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L; Weston, Philip J; Harding, Jane E; Shaw, Geoffrey M; Chase, J Geoffrey

    2014-05-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia it is important to understand the impact of these errors on CGM data. © 2014 Diabetes Technology Society.

  20. Habitable Exoplanet Imager Optical-Mechanical Design and Analysis

    NASA Technical Reports Server (NTRS)

    Gaskins, Jonathan; Stahl, H. Philip

    2017-01-01

    The Habitable Exoplanet Imager (HabEx) is a space telescope currently in development whose mission includes finding and spectroscopically characterizing exoplanets. Effective high-contrast imaging requires tight stability requirements of the mirrors to prevent issues such as line of sight and wavefront errors. PATRAN and NASTRAN were used to model updates in the design of the HabEx telescope and find how those updates affected stability. Most of the structural modifications increased first mode frequencies and improved line of sight errors. These studies will be used to help define the baseline HabEx telescope design.

  1. Solar dynamic heat receiver thermal characteristics in low earth orbit

    NASA Technical Reports Server (NTRS)

    Wu, Y. C.; Roschke, E. J.; Birur, G. C.

    1988-01-01

    A simplified system model is under development for evaluating the thermal characteristics and thermal performance of a solar dynamic spacecraft energy system's heat receiver. Results based on baseline orbit, power system configuration, and operational conditions are generated for three basic receiver concepts and three concentrator surface slope errors. Receiver thermal characteristics and thermal behavior in LEO conditions are presented. The configuration in which heat is directly transferred to the working fluid is noted to generate the best system and thermal characteristics, as well as the lowest performance degradation with increasing slope error.

  2. The problem of isotopic baseline: Reconstructing the diet and trophic position of fossil animals

    NASA Astrophysics Data System (ADS)

    Casey, Michelle M.; Post, David M.

    2011-05-01

    Stable isotope methods are powerful, frequently used tools which allow diet and trophic position reconstruction of organisms and the tracking of energy sources through ecosystems. The majority of ecosystems have multiple food sources which have distinct carbon and nitrogen isotopic signatures despite occupying a single trophic level. This difference in the starting isotopic composition of primary producers sets up an isotopic baseline that needs to be accounted for when calculating diet or trophic position using stable isotopic methods. This is particularly important when comparing animals from different regions or different times. Failure to do so can cause erroneous estimations of diet or trophic level, especially for organisms with mixed diets. The isotopic baseline is known to vary seasonally and in concert with a host of physical and chemical variables such as mean annual rainfall, soil maturity, and soil pH in terrestrial settings and lake size, depth, and distance from shore in aquatic settings. In the fossil record, the presence of shallowing upward suites of rock, or parasequences, will have a considerable impact on the isotopic baseline as basin size, depth and distance from shore change simultaneously with stratigraphic depth. For this reason, each stratigraphic level is likely to need an independent estimation of baseline even within a single outcrop. Very little is known about the scope of millennial or decadal variation in isotopic baseline. Without multi-year data on the nature of isotopic baseline variation, the impacts of time averaging on our ability to resolve trophic relationships in the fossil record will remain unclear. The use of a time averaged baseline will increase the amount of error surrounding diet and trophic position reconstructions. Where signal to noise ratios are low, due to low end member disparity (e.g., aquatic systems), or where the observed isotopic shift is small (≤ 1‰), the error introduced by time averaging may severely inhibit the scope of one's interpretations and limit the types of questions one can reliably answer. In situations with strong signal strength, resulting from high amounts of end member disparity (e.g., terrestrial settings), this additional error may be surmountable. Baseline variation that is adequately characterized can be dealt with by applying multiple end-member mixing models.
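
    A hedged illustration of how the isotopic baseline enters the standard trophic position and two end-member mixing equations (in the style of Post 2002); the enrichment factors and example values below are assumptions for demonstration, not data from the paper.

    def trophic_position(d15n_consumer, d15n_baseline, lambda_base=2.0, delta_n=3.4):
        """TP = lambda + (d15N_consumer - d15N_baseline) / per-trophic-level enrichment."""
        return lambda_base + (d15n_consumer - d15n_baseline) / delta_n

    def source_fraction(d13c_consumer, d13c_source1, d13c_source2):
        """Fraction of diet from source 1 in a two end-member carbon mixing model."""
        return (d13c_consumer - d13c_source2) / (d13c_source1 - d13c_source2)

    # the same consumer interpreted against two different baselines
    print(trophic_position(12.0, d15n_baseline=4.0))   # ~4.35
    print(trophic_position(12.0, d15n_baseline=6.0))   # ~3.76, so the baseline choice matters
    print(source_fraction(-22.0, d13c_source1=-20.0, d13c_source2=-28.0))  # 0.75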

  3. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
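
    A back-of-the-envelope sketch of how disparity error maps to down-range error for a stereo pair: Z = f*B/d, so an error of sigma_d pixels in disparity gives roughly sigma_Z = Z^2/(f*B)*sigma_d. The focal length and baseline below are illustrative values, not the analyzed cameras' calibration; only the 0.32-pixel disparity error comes from the abstract.

    def range_from_disparity(f_px, baseline_m, disparity_px):
        return f_px * baseline_m / disparity_px

    def range_error(f_px, baseline_m, range_m, sigma_disparity_px):
        return range_m**2 / (f_px * baseline_m) * sigma_disparity_px

    f_px, B = 800.0, 0.3                     # focal length in pixels, 30 cm baseline
    for Z in (2.0, 5.0, 10.0):
        print(f"Z = {Z} m -> sigma_Z ~ {range_error(f_px, B, Z, 0.32) * 100:.1f} cm")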

  4. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  5. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  6. Assessment of estimated retinal atrophy progression in Stargardt macular dystrophy using spectral-domain optical coherence tomography

    PubMed Central

    Strauss, Rupert W; Muñoz, Beatriz; Wolfson, Yulia; Sophie, Raafay; Fletcher, Emily; Bittencourt, Millena G; Scholl, Hendrik P N

    2016-01-01

    Aims: To estimate disease progression based on analysis of macular volume measured by spectral-domain optical coherence tomography (SD-OCT) in patients affected by Stargardt macular dystrophy (STGD1) and to evaluate the influence of software errors on these measurements. Methods: 58 eyes of 29 STGD1 patients were included. Numbers and types of algorithm errors were recorded and manually corrected. In a subgroup of 36 eyes of 18 patients with at least two examinations over time, total macular volume (TMV) and volumes of all nine Early Treatment of Diabetic Retinopathy Study (ETDRS) subfields were obtained. Random effects models were used to estimate the rate of change per year for the population, and empirical Bayes slopes were used to estimate yearly decline in TMV for individual eyes. Results: 6958 single B-scans from 190 macular cube scans were analysed. 2360 (33.9%) showed algorithm errors. Mean observation period for follow-up data was 15 months (range 3–40). The median (IQR) change in TMV using the empirical Bayes estimates for the individual eyes was −0.103 (−0.145, −0.059) mm3 per year. The mean (±SD) TMV was 6.321±1.000 mm3 at baseline, and rate of decline was −0.118 mm3 per year (p=0.003). Yearly mean volume change was −0.004 mm3 in the central subfield (mean baseline=0.128 mm3), −0.032 mm3 in the inner (mean baseline=1.484 mm3) and −0.079 mm3 in the outer ETDRS subfields (mean baseline=5.206 mm3). Conclusions: SD-OCT measurements allow monitoring the decline in retinal volume in STGD1; however, they require significant manual correction of software errors. PMID:26568636

  7. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  8. A New Approach to Estimate Forest Parameters Using Dual-Baseline Pol-InSAR Data

    NASA Astrophysics Data System (ADS)

    Bai, L.; Hong, W.; Cao, F.; Zhou, Y.

    2009-04-01

    In POL-InSAR applications using the ESPRIT technique, it is assumed that there exist stable scattering centres in the forest. However, forest observations severely suffer from volume and temporal decorrelation, and the scatterers are not as stable as assumed, so the obtained interferometric information is not as accurate as expected. In addition, the ESPRIT technique cannot identify which interferometric phases correspond to the ground and the canopy, and it provides multiple estimates of the height between the two scattering centers because of phase unwrapping. Therefore, estimation errors are introduced into the forest height results. To suppress these two types of errors, we use dual-baseline POL-InSAR data to estimate forest height. Dual-baseline coherence optimization is applied to obtain interferometric information for stable scattering centers in the forest. From the interferometric phases of the different baselines, the estimation errors caused by phase unwrapping are resolved, and other estimation errors can be suppressed as well. Experiments were performed on ESAR L-band POL-InSAR data. The results show that the proposed method provides more accurate forest heights than the ESPRIT technique.

  9. Status and Prospects for Combined GPS LOD and VLBI UT1 Measurements

    NASA Astrophysics Data System (ADS)

    Senior, K.; Kouba, J.; Ray, J.

    2010-01-01

    A Kalman filter was developed to combine VLBI estimates of UT1-TAI with biased length of day (LOD) estimates from GPS. The VLBI results are the analyses of the NASA Goddard Space Flight Center group from 24-hr multi-station observing sessions several times per week and the nearly daily 1-hr single-baseline sessions. Daily GPS LOD estimates from the International GNSS Service (IGS) are combined with the VLBI UT1-TAI by modeling the natural excitation of LOD as the integral of a white noise process (i.e., as a random walk) and the UT1 variations as the integration of LOD, similar to the method described by Morabito et al. (1988). To account for GPS technique errors, which express themselves mostly as temporally correlated biases in the LOD measurements, a Gauss-Markov model has been added to assimilate the IGS data, together with a fortnightly sinusoidal term to capture errors in the IGS treatments of tidal effects. Evaluated against independent atmospheric and oceanic axial angular momentum (AAM + OAM) excitations and compared to other UT1/LOD combinations, ours performs best overall in terms of lowest RMS residual and highest correlation with (AAM + OAM) over sliding intervals down to 3 d. The IERS 05C04 and Bulletin A combinations show strong high-frequency smoothing and other problems. Until modified, the JPL SPACE series suffered in the high frequencies from not including any GPS-based LODs. We find, surprisingly, that further improvements are possible in the Kalman filter combination by selective rejection of some VLBI data. The best combined results are obtained by excluding all the 1-hr single-baseline UT1 data as well as those 24-hr UT1 measurements with formal errors greater than 5 μs (about 18% of the multi-baseline sessions). A rescaling of the VLBI formal errors, rather than rejection, was not an effective strategy. These results suggest that the UT1 errors of the 1-hr and weaker 24-hr VLBI sessions are non-Gaussian and more heterogeneous than expected, possibly due to the diversity of observing geometries used, other neglected systematic effects, or to the much shorter observational averaging interval of the single-baseline sessions. UT1 prediction services could benefit from better handling of VLBI inputs together with proper assimilation of IGS LOD products, including using the Ultra-rapid series that is updated four times daily with 15 hr delay.
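
    A toy Kalman filter written from the description above, not the authors' software: UT1 is treated as the negative integral of LOD, LOD excitation as a random walk, and the GPS LOD measurement carries a Gauss-Markov bias. The noise levels, the 30-day bias time constant, and the placeholder measurements are assumptions for illustration.

    import numpy as np

    dt, tau = 1.0, 30.0                             # days
    F = np.array([[1.0, -dt, 0.0],                  # UT1_{k+1} = UT1_k - LOD_k * dt
                  [0.0, 1.0, 0.0],                  # LOD modeled as a random walk
                  [0.0, 0.0, np.exp(-dt / tau)]])   # Gauss-Markov bias on GPS LOD
    Q = np.diag([0.0, 20.0**2, 5.0**2])             # process noise (microseconds^2)
    H_gps = np.array([[0.0, 1.0, 1.0]])             # GPS observes LOD plus its bias
    H_vlbi = np.array([[1.0, 0.0, 0.0]])            # VLBI observes UT1 directly
    R_gps, R_vlbi = 15.0**2, 10.0**2                # measurement noise variances

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, H, R):
        y = z - H @ x                               # innovation
        S = H @ P @ H.T + R
        K = P @ H.T / S                             # Kalman gain (scalar measurement)
        return x + (K * y).ravel(), (np.eye(3) - K @ H) @ P

    x, P = np.zeros(3), np.diag([1e6, 1e4, 1e4])
    for day in range(60):
        x, P = predict(x, P)
        x, P = update(x, P, z=2000.0, H=H_gps, R=R_gps)          # daily biased GPS LOD
        if day % 3 == 0:                                         # occasional VLBI UT1
            x, P = update(x, P, z=-2000.0 * day, H=H_vlbi, R=R_vlbi)
    print("estimated [UT1, LOD, GPS bias]:", np.round(x, 1))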

  10. Reproducibility of the six-minute walking test in chronic heart failure patients.

    PubMed

    Pinna, G D; Opasich, C; Mazza, A; Tangenti, A; Maestri, R; Sanarico, M

    2000-11-30

    The six-minute walking test (WT) is used in trials and clinical practice as an easy tool to evaluate the functional capacity of chronic heart failure (CHF) patients. As WT measurements are highly variable both between and within individuals, this study aims at assessing the contribution of the different sources of variation and estimating the reproducibility of the test. A statistical model describing WT measurements as a function of fixed and random effects is proposed and its parameters estimated. We considered 202 stable CHF patients who performed two baseline WTs separated by a 30 minute rest; 49 of them repeated the two tests 3 months later (follow-up control). They had no changes in therapy or major clinical events. Another 31 subjects performed two baseline tests separated by 24 hours. Collected data were analysed using a mixed model methodology. There was no significant difference between measurements taken 30 minutes and 24 hours apart (p = 0.99). A trend effect of 17 (1.4) m (mean (SE)) was consistently found between duplicate tests (p < 0.001). REML estimates of variance components were: 5189 (674) for subject differences in the error-free value; 1280 (304) for subject differences in spontaneous clinical evolution between baseline and follow-up control, and 266 (23) for the within-subject error. Hence, the standard error of measurement was 16.3 m, namely 4 per cent of the average WT performance (403 m) in this sample. The intraclass correlation coefficient was 0.96. We conclude that WT measurements are characterized by good intrasubject reproducibility and excellent reliability. When follow-up studies > or = 3 months are performed, unpredictable changes in individual walking performance due to spontaneous clinical evolution are to be expected. Their clinical significance, however, is not known. Copyright 2000 John Wiley & Sons, Ltd.
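
    A simple check of the reliability quantities reported above from the estimated variance components. The formulas assumed here are the usual ones (SEM as the square root of the within-subject error variance, ICC as between-subject variance over total variance for same-day repeats); the paper's exact ICC definition may differ slightly.

    import math

    var_subject = 5189.0      # between-subject differences in the error-free value
    var_evolution = 1280.0    # spontaneous clinical evolution between visits
    var_error = 266.0         # within-subject measurement error

    sem = math.sqrt(var_error)
    icc_same_day = var_subject / (var_subject + var_error)
    print(f"SEM ~ {sem:.1f} m ({100 * sem / 403:.1f}% of the 403 m mean)")
    print(f"ICC (same-day repeats) ~ {icc_same_day:.2f}")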

  11. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10 to the 8th power and agree with very long base interferometry (VLBI) solutions at the level of 1.5 parts in 10 to the 8th power. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combine processing of GPS carrier phase and pseudorange data. For data arc of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the situation.

  12. Factors that enhance English-speaking speech-language pathologists' transcription of Cantonese-speaking children's consonants.

    PubMed

    Lockart, Rebekah; McLeod, Sharynne

    2013-08-01

    To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% consonants was transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese-adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.

  13. Exploration of Two Training Paradigms Using Forced Induced Weight Shifting With the Tethered Pelvic Assist Device to Reduce Asymmetry in Individuals After Stroke: Case Reports.

    PubMed

    Bishop, Lauri; Khan, Moiz; Martelli, Dario; Quinn, Lori; Stein, Joel; Agrawal, Sunil

    2017-10-01

    Many robotic devices in rehabilitation incorporate an assist-as-needed haptic guidance paradigm to promote training. This error reduction model, while beneficial for skill acquisition, could be detrimental for long-term retention. Error augmentation (EA) models have been explored as alternatives. A robotic Tethered Pelvic Assist Device has been developed to study force application to the pelvis on gait and was used here to induce weight shift onto the paretic (error reduction) or nonparetic (error augmentation) limb during treadmill training. The purpose of these case reports is to examine effects of training with these two paradigms to reduce load force asymmetry during gait in two individuals after stroke (>6 mos). Participants presented with baseline gait asymmetry, although independent community ambulators. Participants underwent 1-hr trainings for 3 days using either the error reduction or error augmentation model. Outcomes included the Borg rating of perceived exertion scale for treatment tolerance and measures of force and stance symmetry. Both participants tolerated training. Force symmetry (measured on treadmill) improved from pretraining to posttraining (36.58% and 14.64% gains), however, with limited transfer to overground gait measures (stance symmetry gains of 9.74% and 16.21%). Training with the Tethered Pelvic Assist Device device proved feasible to improve force symmetry on the treadmill irrespective of training model. Future work should consider methods to increase transfer to overground gait.

  14. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for small number of clusters in each intervention group. PMID:27177885

  15. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for small number of clusters in each intervention group.

  16. A Psychological Model for Aggregating Judgments of Magnitude

    NASA Astrophysics Data System (ADS)

    Merkle, Edgar C.; Steyvers, Mark

    In this paper, we develop and illustrate a psychologically-motivated model for aggregating judgments of magnitude across experts. The model assumes that experts' judgments are perturbed from the truth by both systematic biases and random error, and it provides aggregated estimates that are implicitly based on the application of nonlinear weights to individual judgments. The model is also easily extended to situations where experts report multiple quantile judgments. We apply the model to expert judgments concerning flange leaks in a chemical plant, illustrating its use and comparing it to baseline measures.

  17. Evaluation of in-vivo measurement errors associated with micro-computed tomography scans by means of the bone surface distance approach.

    PubMed

    Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco

    2015-11-01

    In vivo micro-computed tomography (µCT) scanning is an important tool for longitudinal monitoring of the bone adaptation process in animal models. However, the errors associated with the usage of in vivo µCT measurements for the evaluation of bone adaptations remain unclear. The aim of this study was to evaluate the measurement errors using the bone surface distance approach. The right tibiae of eight 14-week-old C57BL/6 J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size (10.4 µm) and the tibiae were repositioned between each scan. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration and a region of interest was selected in the proximal tibia metaphysis for analysis. The bone surface distances between the repeated and the baseline scan datasets were evaluated. It was found that the average (±standard deviation) median and 95th percentile bone surface distances were 3.10 ± 0.76 µm and 9.58 ± 1.70 µm, respectively. This study indicated that there were inevitable errors associated with the in vivo µCT measurements of bone microarchitecture and these errors should be taken into account for a better interpretation of bone adaptations measured with in vivo µCT. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
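
    An illustrative sketch of the bone-surface-distance metric: for every vertex of the registered repeated-scan surface, take the distance to the nearest point on the baseline surface, then summarize with the median and 95th percentile. The point clouds below are synthetic stand-ins, not the study's segmented bone surfaces.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_distances(baseline_pts, repeat_pts):
        tree = cKDTree(baseline_pts)
        d, _ = tree.query(repeat_pts)        # nearest-neighbour distance per vertex
        return np.median(d), np.percentile(d, 95)

    rng = np.random.default_rng(4)
    baseline = rng.uniform(0, 1000, size=(20000, 3))              # micrometres
    repeat = baseline + rng.normal(0, 3.0, size=baseline.shape)   # ~repositioning error
    med, p95 = surface_distances(baseline, repeat)
    print(f"median = {med:.2f} um, 95th percentile = {p95:.2f} um")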

  18. The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.

    PubMed

    Goldmann, R E; Schwartz, M F; Wilshire, C E

    2001-09-01

    A corpus of phonological errors produced in narrative speech by a Wernicke's aphasic speaker (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not perseverations. R.W.B.'s anticipation/perseveration ratio measured intermediate between a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates to severity. Finally, R.W.B's anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position. Copyright 2001 Academic Press.

  19. Estimation of stream temperature in support of fish production modeling under future climates in the Klamath River Basin

    USGS Publications Warehouse

    Flint, Lorraine E.; Flint, Alan L.

    2012-01-01

    Stream temperature estimates under future climatic conditions were needed in support of fish production modeling for evaluation of effects of dam removal in the Klamath River Basin. To allow for the persistence of the Klamath River salmon fishery, an upcoming Secretarial Determination in 2012 will review potential changes in water quality and stream temperature to assess alternative scenarios, including dam removal. Daily stream temperature models were developed by using a regression model approach with simulated net solar radiation, vapor density deficit calculated on the basis of air temperature, and mean daily air temperature. Models were calibrated for 6 streams in the Lower, and 18 streams in the Upper, Klamath Basin by using measured stream temperatures for 1999-2008. The standard error of the y-estimate for the estimation of stream temperature for the 24 streams ranged from 0.36 to 1.64°C, with an average error of 1.12°C for all streams. The regression models were then used with projected air temperatures to estimate future stream temperatures for 2010-99. Although the mean change from the baseline historical period of 1950-99 to the projected future period of 2070-99 is only 1.2°C, it ranges from 3.4°C for the Shasta River to no change for Fall Creek and Trout Creek. Variability is also evident in the future with a mean change in temperature for all streams from the baseline period to the projected period of 2070-99 of only 1°C, while the range in stream temperature change is from 0 to 2.1°C. The baseline period, 1950-99, to which the air temperature projections were corrected, established the starting point for the projected changes in air temperature. The average measured daily air temperature for the calibration period 1999-2008, however, was found to be as much as 2.3°C higher than baseline for some rivers, indicating that warming conditions have already occurred in many areas of the Klamath River Basin, and that the stream temperature projections for the 21st century could be underestimating the actual change.
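
    A hedged sketch of the regression approach described above: daily stream temperature modelled from air temperature, vapor density deficit, and net solar radiation, with the standard error of the estimate reported. The synthetic data and coefficients are placeholders, not the Klamath calibration.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 3650                                            # ~10 years of daily values
    air_t = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
    vpd = np.clip(0.3 + 0.05 * air_t + rng.normal(0, 0.1, n), 0, None)
    net_rad = 150 + 100 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 20, n)
    stream_t = 2.0 + 0.6 * air_t + 1.5 * vpd + 0.01 * net_rad + rng.normal(0, 1.0, n)

    X = np.column_stack([np.ones(n), air_t, vpd, net_rad])
    beta, *_ = np.linalg.lstsq(X, stream_t, rcond=None)
    resid = stream_t - X @ beta
    se_estimate = np.sqrt(np.sum(resid**2) / (n - X.shape[1]))   # SE of the y-estimate
    print("coefficients:", np.round(beta, 3), " SE of estimate (deg C):", round(se_estimate, 2))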

  20. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
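
    A sketch of the residual-forming step with a Holt-Winters forecaster; the additive trend, weekly additive seasonal term, and synthetic syndromic counts are assumptions for illustration, and statsmodels is used simply as one available implementation of generalized exponential smoothing.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(6)
    days = pd.date_range("2005-01-01", periods=700, freq="D")
    dow = np.tile([1.3, 1.2, 1.1, 1.0, 1.0, 0.6, 0.5], 100)       # day-of-week effect
    counts = rng.poisson(40 * dow * (1 + 0.2 * np.sin(np.arange(700) / 60)))
    series = pd.Series(counts.astype(float), index=days)

    fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                               seasonal_periods=7).fit()
    forecast = fit.fittedvalues                   # one-step-ahead in-sample forecasts
    residuals = series - forecast                 # input for the control-chart stage
    medape = np.median(np.abs(residuals / series)) * 100
    print(f"MedAPE = {medape:.1f}%")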

  1. Modeling the response of a monopulse radar to impulsive jamming signals using the Block Oriented System Simulator (BOSS)

    NASA Astrophysics Data System (ADS)

    Long, Jeffrey K.

    1989-09-01

    This thesis developed computer models of two types of amplitude comparison monopulse processors using the Block Oriented System Simulation (BOSS) software package and determined the response of these models to impulsive input signals. This study was an effort to determine the susceptibility of monopulse tracking radars to impulsive jamming signals. Two types of amplitude comparison monopulse receivers were modeled, one using logarithmic amplifiers and the other using automatic gain control for signal normalization. Simulations of both types of systems were run under various conditions of gain or frequency imbalance between the two receiver channels. The resulting errors from the imbalanced simulations were compared to the outputs of similar, baseline simulations which had no electrical imbalances. The accuracy of both types of processors was directly affected by gain or frequency imbalances in their receiver channels. In most cases, it was possible to generate both positive and negative angular errors, dependent upon the type and degree of mismatch between the channels. The system most susceptible to induced errors was a frequency imbalanced processor which used AGC circuitry. Any errors introduced will be a function of the degree of mismatch between the channels and therefore would be difficult to exploit reliably.
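
    A simplified numerical illustration of the amplitude-comparison monopulse principle the thesis models: the angle estimate comes from the normalized difference of the two squinted beam amplitudes, and a gain imbalance between the receiver channels shifts that estimate. The Gaussian beam shapes and parameter values are illustrative assumptions, not the BOSS models.

    import numpy as np

    def beam(theta_deg, squint_deg, beamwidth_deg=10.0):
        # Gaussian beam pattern, half power at half the beamwidth off boresight
        return np.exp(-2.776 * ((theta_deg - squint_deg) / beamwidth_deg) ** 2)

    def monopulse_error(theta_deg, gain_imbalance_db=0.0, squint_deg=3.0):
        a = beam(theta_deg, +squint_deg)
        b = beam(theta_deg, -squint_deg) * 10 ** (gain_imbalance_db / 20)
        return (a - b) / (a + b)          # normalized difference-over-sum

    for imbalance in (0.0, 0.5, 1.0):     # dB of channel gain mismatch
        print(f"{imbalance} dB imbalance -> indicated error at boresight:",
              round(monopulse_error(0.0, imbalance), 3))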

  2. Does manipulating the speed of visual flow in virtual reality change distance estimation while walking in Parkinson's disease?

    PubMed

    Ehgoetz Martens, Kaylena A; Ellard, Colin G; Almeida, Quincy J

    2015-03-01

    Dopaminergic replacement therapy is believed to improve sensory processing in PD, while delayed perceptual speed is thought to be caused by a predominantly cholinergic deficit; it remains unclear whether sensory-perceptual deficits are a result of corrupt sensory processing or of a delay in updating perceived feedback during movement. The current study aimed to examine these two hypotheses by manipulating visual flow speed and dopaminergic medication to test which influenced distance estimation in PD. Fourteen PD and sixteen HC participants were instructed to estimate the distance of a remembered target by walking to the position the target formerly occupied. This task was completed in virtual reality in order to manipulate the visual flow (VF) speed in real time. Three conditions were carried out: (1) BASELINE: VF speed was equal to participants' real-time movement speed; (2) SLOW: VF speed was reduced by 50%; (3) FAST: VF speed was increased by 30%. Individuals with PD performed the experiment in their ON and OFF state. PD demonstrated significantly greater judgement error during the BASELINE and FAST conditions compared to HC, although PD did not improve their judgement error during the SLOW condition. Additionally, PD had greater variable error during baseline compared to HC; however, during the SLOW condition, PD had significantly less variable error compared to baseline and similar variable error to HC participants. Overall, dopaminergic medication did not significantly influence judgement error. Therefore, these results suggest that corrupt processing of sensory information is the main contributor to sensory-perceptual deficits during movement in PD rather than delayed updating of sensory feedback.

  3. Influence of age, sex, technique, and exercise program on movement patterns after an anterior cruciate ligament injury prevention program in youth soccer players.

    PubMed

    DiStefano, Lindsay J; Padua, Darin A; DiStefano, Michael J; Marshall, Stephen W

    2009-03-01

    Anterior cruciate ligament (ACL) injury prevention programs show promising results with changing movement; however, little information exists regarding whether a program designed for an individual's movements may be effective or how baseline movements may affect outcomes. A program designed to change specific movements would be more effective than a "one-size-fits-all" program. Greatest improvement would be observed among individuals with the most baseline error. Subjects of different ages and sexes respond similarly. Randomized controlled trial; Level of evidence, 1. One hundred seventy-three youth soccer players from 27 teams were randomly assigned to a generalized or stratified program. Subjects were videotaped during jump-landing trials before and after the program and were assessed using the Landing Error Scoring System (LESS), which is a valid clinical movement analysis tool. A high LESS score indicates more errors. Generalized players performed the same exercises, while the stratified players performed exercises to correct their initial movement errors. Change scores were compared between groups of varying baseline errors, ages, sexes, and programs. Subjects with the highest baseline LESS score improved the most (95% CI, -3.4 to -2.0). High school subjects (95% CI, -1.7 to -0.98) improved their technique more than pre-high school subjects (95% CI, -1.0 to -0.4). There was no difference between the programs or sexes. Players with the greatest amount of movement errors experienced the most improvement. A program's effectiveness may be enhanced if this population is targeted.

  4. How many drinks did you have on September 11, 2001?

    PubMed

    Perrine, M W Bud; Schroder, Kerstin E E

    2005-07-01

    This study tested the predictability of error in retrospective self-reports of alcohol consumption on September 11, 2001, among 80 Vermont light, medium and heavy drinkers. Subjects were 52 men and 28 women participating in daily self-reports of alcohol consumption for a total of 2 years, collected via interactive voice response technology (IVR). In addition, retrospective self-reports of alcohol consumption on September 11, 2001, were collected by telephone interview 4-5 days following the terrorist attacks. Retrospective error was calculated as the difference between the IVR self-report of drinking behavior on September 11 and the retrospective self-report collected by telephone interview. Retrospective error was analyzed as a function of gender and baseline drinking behavior during the 365 days preceding September 11, 2001 (termed "the baseline"). The intraclass correlation (ICC) between daily IVR and retrospective self-reports of alcohol consumption on September 11 was .80. Women provided, on average, more accurate self-reports (ICC = .96) than men (ICC = .72) but displayed more underreporting bias in retrospective responses. Amount and individual variability of alcohol consumption during the 1-year baseline explained, on average, 11% of the variance in overreporting (r = .33), 9% of the variance in underreporting (r = .30) and 25% of the variance in the overall magnitude of error (r = .50), with correlations up to .62 (r2 = .38). The size and direction of error were clearly predictable from the amount and variation in drinking behavior during the 1-year baseline period. The results demonstrate the utility and detail of information that can be derived from daily IVR self-reports in the analysis of retrospective error.

  5. Improving Energy Use Forecast for Campus Micro-grids using Indirect Indicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aman, Saima; Simmhan, Yogesh; Prasanna, Viktor K.

    2011-12-11

    The rising global demand for energy is best addressed by adopting and promoting sustainable methods of power consumption. We employ an informatics approach towards forecasting the energy consumption patterns in a university campus micro-grid which can be used for energy use planning and conservation. We use novel indirect indicators of energy that are commonly available to train regression tree models that can predict campus and building energy use for coarse (daily) and fine (15-min) time intervals, utilizing 3 years of sensor data collected at 15-min intervals from 170 smart power meters. We analyze the impact of individual features used in the models to identify the ones best suited for the application. Our models show a high degree of accuracy with CV-RMSE errors ranging from 7.45% to 19.32%, and a reduction in error from baseline models by up to 53%.
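
    A hedged sketch of the modelling idea: a regression tree trained on indirect indicators (calendar and schedule features rather than direct sub-metering) to predict daily campus energy use, scored with CV-RMSE. The feature names and synthetic data are illustrative, not the actual campus dataset or model configuration.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 1095                                             # ~3 years of daily records
    day_of_week = rng.integers(0, 7, n)
    semester_in_session = rng.integers(0, 2, n)
    outside_temp = 18 + 8 * rng.standard_normal(n)
    kwh = (5000 + 1500 * semester_in_session - 400 * (day_of_week >= 5)
           + 60 * np.abs(outside_temp - 20) + rng.normal(0, 200, n))

    X = np.column_stack([day_of_week, semester_in_session, outside_temp])
    X_tr, X_te, y_tr, y_te = train_test_split(X, kwh, test_size=0.25, random_state=0)
    model = DecisionTreeRegressor(max_depth=6, min_samples_leaf=10).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    cv_rmse = np.sqrt(np.mean((y_te - pred) ** 2)) / np.mean(y_te) * 100   # CV-RMSE in %
    print(f"CV-RMSE = {cv_rmse:.2f}%")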

  6. Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elgered, G.; Davis, J.L.; Herring, T.A.

    1991-04-10

    An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.

  7. Forest Structure Characterization Using JPL's UAVSAR Multi-Baseline Polarimetric SAR Interferometry and Tomography

    NASA Technical Reports Server (NTRS)

    Neumann, Maxim; Hensley, Scott; Lavalle, Marco; Ahmed, Razi

    2013-01-01

    This paper concerns forest remote sensing using JPL's multi-baseline polarimetric interferometric UAVSAR data. It presents exemplary results and analyzes the possibilities and limitations of using SAR Tomography and Polarimetric SAR Interferometry (PolInSAR) techniques for the estimation of forest structure. Performance and error indicators for the applicability and reliability of the used multi-baseline (MB) multi-temporal (MT) PolInSAR random volume over ground (RVoG) model are discussed. Experimental results are presented based on JPL's L-band repeat-pass polarimetric interferometric UAVSAR data over temperate and tropical forest biomes in the Harvard Forest, Massachusetts, and in the La Amistad Park, Panama and Costa Rica. The results are partially compared with ground field measurements and with air-borne LVIS lidar data.

  8. Forest Structure Characterization Using Jpl's UAVSAR Multi-Baseline Polarimetric SAR Interferometry and Tomography

    NASA Technical Reports Server (NTRS)

    Neumann, Maxim; Hensley, Scott; Lavalle, Marco; Ahmed, Razi

    2013-01-01

    This paper concerns forest remote sensing using JPL's multi-baseline polarimetric interferometric UAVSAR data. It presents exemplary results and analyzes the possibilities and limitations of using SAR Tomography and Polarimetric SAR Interferometry (PolInSAR) techniques for the estimation of forest structure. Performance and error indicators for the applicability and reliability of the used multi-baseline (MB) multi-temporal (MT) PolInSAR random volume over ground (RVoG) model are discussed. Experimental results are presented based on JPL's L-band repeat-pass polarimetric interferometric UAVSAR data over temperate and tropical forest biomes in the Harvard Forest, Massachusetts, and in the La Amistad Park, Panama and Costa Rica. The results are partially compared with ground field measurements and with air-borne LVIS lidar data.

  9. Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts

    NASA Astrophysics Data System (ADS)

    Balasubramanian, A.; Shamsuddin, R.; Prabhakaran, B.; Sawant, A.

    2017-03-01

    Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn, causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline as well as online, along with data sufficiency and skew compensation for class imbalances. The performance of different training strategies were compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant and ensemble Adaboost. The prediction performance is evaluated using metrics such as accuracy, precision, recall and the area under the curve (AUC) for repeater operating characteristics curve. The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve highest prediction accuracies (90.5-91.4%) (ii) the predictive modeling yields lowest accuracies (50-60%) when the training data does not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive for real-time prediction. The binary classification performance is promising, indicated by high AUCs (0.96-0.98). It also confirms the utility of prior data from previous patients, and also the necessity of training the classifier on some initial data from the new patient for reasonable prediction performance. The ability to predict a baseline shift with a sufficient look-ahead window will enable clinical systems or even human users to hold the treatment beam in such situations, thereby reducing the probability of serious geometric and dosimetric errors.

  10. Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts

    PubMed Central

    Balasubramanian, A; Shamsuddin, R; Prabhakaran, B; Sawant, A

    2017-01-01

    Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn, causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline as well as online, along with data sufficiency and skew compensation for class imbalances. The performance of different training strategies were compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant and ensemble Adaboost. The prediction performance is evaluated using metrics such as accuracy, precision, recall and the area under the curve (AUC) for repeater operating characteristics curve. The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve highest prediction accuracies (90.5–91.4%); (ii) the predictive modeling yields lowest accuracies (50–60%) when the training data does not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive for real-time prediction. The binary classification performance is promising, indicated by high AUCs (0.96–0.98). It also confirms the utility of prior data from previous patients, and also the necessity of training the classifier on some initial data from the new patient for reasonable prediction performance. The ability to predict a baseline shift with a sufficient lookahead window will enable clinical systems or even human users to hold the treatment beam in such situations, thereby reducing the probability of serious geometric and dosimetric errors. PMID:28075331

  11. Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts.

    PubMed

    Balasubramanian, A; Shamsuddin, R; Prabhakaran, B; Sawant, A

    2017-03-07

    Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn, causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline as well as online, along with data sufficiency and skew compensation for class imbalances. The performance of different training strategies were compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant and ensemble Adaboost. The prediction performance is evaluated using metrics such as accuracy, precision, recall and the area under the curve (AUC) for repeater operating characteristics curve. The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve highest prediction accuracies (90.5-91.4%); (ii) the predictive modeling yields lowest accuracies (50-60%) when the training data does not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive for real-time prediction. The binary classification performance is promising, indicated by high AUCs (0.96-0.98). It also confirms the utility of prior data from previous patients, and also the necessity of training the classifier on some initial data from the new patient for reasonable prediction performance. The ability to predict a baseline shift with a sufficient look-ahead window will enable clinical systems or even human users to hold the treatment beam in such situations, thereby reducing the probability of serious geometric and dosimetric errors.

  12. Higher mental workload is associated with poorer laparoscopic performance as measured by the NASA-TLX tool.

    PubMed

    Yurko, Yuliya Y; Scerbo, Mark W; Prabhu, Ajita S; Acker, Christina E; Stefanidis, Dimitrios

    2010-10-01

    Increased workload during task performance may increase fatigue and facilitate errors. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is a previously validated tool for workload self-assessment. We assessed the relationship of workload and performance during simulator training on a complex laparoscopic task. NASA-TLX workload data from three separate trials were analyzed. All participants were novices (n = 28), followed the same curriculum on the fundamentals of laparoscopic surgery suturing model, and were tested in the animal operating room (OR) on a Nissen fundoplication model after training. Performance and workload scores were recorded at baseline, after proficiency achievement, and during the test. Performance, NASA-TLX scores, and inadvertent injuries during the test were analyzed and compared. Workload scores declined during training and mirrored performance changes. NASA-TLX scores correlated significantly with performance scores (r = -0.5, P < 0.001). Participants with higher workload scores caused more inadvertent injuries to adjacent structures in the OR (r = 0.38, P < 0.05). Increased mental and physical workload scores at baseline correlated with higher workload scores in the OR (r = 0.52-0.82; P < 0.05) and more inadvertent injuries (r = 0.52, P < 0.01). Increased workload is associated with inferior task performance and higher likelihood of errors. The NASA-TLX questionnaire accurately reflects workload changes during simulator training and may identify individuals more likely to experience high workload and more prone to errors during skill transfer to the clinical environment.

  13. Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting

    NASA Astrophysics Data System (ADS)

    Abarr, Miles L. Lindsey

    This work introduces a new concept for a utility scale combined energy storage and generation system. The proposed design utilizes a pumped thermal energy storage (PTES) system, which also utilizes waste heat leaving a natural gas peaker plant. This system creates a low cost utility-scale energy storage system by leveraging this dual-functionality. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in Mathworks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model are provided. The experimental results showed the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a Finite Element Analysis (FEA) model showed <1% error for bottoming mode heat transfer. The system model was used to conduct sensitivity analysis, baseline performance, and levelized cost of energy of a recently proposed Pumped Thermal Energy Storage and Bottoming System (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on the system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual-functionality of the Bot-PTES enables it to have higher capacity factor, leading to 91-197/MWh levelized cost of energy compared to 262-284/MWh for batteries and $172-254/MWh for Compressed Air Energy Storage.

  14. The effect of tracking network configuration on Global Positioning System (GPS) baseline estimates for the CASA (Central and South America) Uno experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, S.K.; Dixon, T.H.; Freymueller, J.T.

    1990-04-01

    Geodetic monitoring of subduction of the Nazca and Cocos plates is a goal of the CASA (Central and South America) Global Positioning System (GPS) experiments, and requires measurement of intersite distances (baselines) in excess of 500 km. The major error source in these measurements is the uncertainty in the position of the GPS satellites at the time of observation. A key aspect of the first CASA experiment, CASA Uno, was the initiation of a global network of tracking stations minimize these errors. The authors studied the effect of using various subsets of this global tracking network on long (>100 km)more » baseline estimates in the CASA region. Best results were obtained with a global tracking network consisting of three U.S. fiducial stations, two sites in the southwest pacific and two sites in Europe. Relative to smaller subsets, this global network improved baseline repeatability, resolution of carrier phase cycle ambiguities, and formal errors of the orbit estimates. Describing baseline repeatability for horizontal components as {sigma}=(a{sup 2} + b{sup 2}L{sup 2}){sup 1/2} where L is baseline length, the authors obtained a = 4 and 9 mm and b = 2.8{times}10{sup {minus}8} and 2.3{times}10{sup {minus}8} for north and east components, respectively, on CASA baselines up to 1,000 km in length with this global network.« less

  15. Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning.

    PubMed

    Shi, Junbo; Wang, Gaojing; Han, Xianquan; Guo, Jiming

    2017-06-12

    Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products will probably cause significant consequences for GPS positioning, especially for real-time applications. Currently three types of satellite products have been made available for real-time positioning, including the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product, which demonstrates cm to m level orbit accuracies and sub-ns to ns level clock accuracies. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect the point positioning more significantly than the orbit corrections. On the contrary, only the real-time orbit corrections impact the relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, the cm-dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.

  16. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data and the resulting resistivity tomograph was used as the priormore » information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.« less

  17. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    PubMed

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM) to search for integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor to the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship of the geometric model and the noise error. Although the AFM is more flexible, it is lack of analysis on this aspect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted in a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied on single-frequency processing, which is known as one of the challenging case of GNSS attitude determination.

  18. Spatio-temporal modeling and optimization of a deformable-grating compressor for short high-energy laser pulses

    DOE PAGES

    Qiao, Jie; Papa, J.; Liu, X.

    2015-09-24

    Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induce spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable gratingmore » has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.« less

  19. The Effect of Antenna Position Errors on Redundant-Baseline Calibration of HERA

    NASA Astrophysics Data System (ADS)

    Orosz, Naomi; Dillon, Joshua; Ewall-Wice, Aaron; Parsons, Aaron; HERA Collaboration

    2018-01-01

    HERA (the Hydrogen Epoch of Reionization Array) is a large, highly-redundant radio interferometer in South Africa currently being built out to 350 14-m dishes. Its mission is to probe large scale structure during and prior to the epoch of reionization using the 21 cm hyperfine transition of neutral hydrogen. The array is designed to be calibrated using redundant baselines of known lengths. However, the dishes can deviate from ideal positions, with errors on the order of a few centimeters. This potentially increases foreground contamination of the 21 cm power spectrum in the cleanest part of Fourier space. The calibration algorithm treats groups of baselines that should be redundant, but are not due to position errors, as if they actually are. Accurate, precise calibration is critical because the foreground signals are 100,000 times stronger than the reionization signal. We explain the origin of this effect and discuss weighting strategies to mitigate it.

  20. Analyzing recurrent events when the history of previous episodes is unknown or not taken into account: proceed with caution.

    PubMed

    Navarro, Albert; Casanovas, Georgina; Alvarado, Sergio; Moriña, David

    Researchers in public health are often interested in examining the effect of several exposures on the incidence of a recurrent event. The aim of the present study is to assess how well the common-baseline hazard models perform to estimate the effect of multiple exposures on the hazard of presenting an episode of a recurrent event, in presence of event dependence and when the history of prior-episodes is unknown or is not taken into account. Through a comprehensive simulation study, using specific-baseline hazard models as the reference, we evaluate the performance of common-baseline hazard models by means of several criteria: bias, mean squared error, coverage, confidence intervals mean length and compliance with the assumption of proportional hazards. Results indicate that the bias worsen as event dependence increases, leading to a considerable overestimation of the exposure effect; coverage levels and compliance with the proportional hazards assumption are low or extremely low, worsening with increasing event dependence, effects to be estimated, and sample sizes. Common-baseline hazard models cannot be recommended when we analyse recurrent events in the presence of event dependence. It is important to have access to the history of prior-episodes per subject, it can permit to obtain better estimations of the effects of the exposures. Copyright © 2016 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.

  1. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors are used to reconstruct objective function rigorously. Then, searching strategy is improved. It substitutes gradually Enlarged ellipsoidal search space for non-ellipsoidal search space to ensure correct ambiguity candidates are within it and make the searching process directly be carried out by least squares ambiguity decorrelation algorithm (LAMBDA) method. For all vector candidates, some ones are further eliminated by derived approximate inequality, which accelerates the searching process. Experimental results show that compared to traditional method with only baseline length constraint, this new method can utilize a priori baseline three-dimensional knowledge to fix ambiguity reliably and achieve a high success rate. Experimental tests also verify it is not very sensitive to baseline vector error and can perform robustly when angular error is not great.

  2. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Medication safety initiative in reducing medication errors.

    PubMed

    Nguyen, Elisa E; Connolly, Phyllis M; Wong, Vivian

    2010-01-01

    The purpose of the study was to evaluate whether a Medication Pass Time Out initiative was effective and sustainable in reducing medication administration errors. A retrospective descriptive method was used for this research, where a structured Medication Pass Time Out program was implemented following staff and physician education. As a result, the rate of interruptions during the medication administration process decreased from 81% to 0. From the observations at baseline, 6 months, and 1 year after implementation, the percent of doses of medication administered without interruption improved from 81% to 99%. Medication doses administered without errors at baseline, 6 months, and 1 year improved from 98% to 100%.

  4. Earth Orientation Effects on Mobile VLBI Baselines

    NASA Technical Reports Server (NTRS)

    Allen, S. L.

    1984-01-01

    Improvements in data quality for the mobile VLBI systems have placed higher accuracy requirements on Earth orientation calibrations. Errors in these calibrations may give rise to systematic effects in the nonlength components of the baselines. Various sources of Earth orientation data were investigated for calibration of Mobile VLBI baselines. Significant differences in quality between the several available sources of UT1-UTC were found. It was shown that the JPL Kalman filtered space technology data were at least as good as any other and adequate to the needs of current Mobile VLBI systems and observing plans. For polar motion, the values from all service suffice. The effect of Earth orientation errors on the accuracy of differenced baselines was also investigated. It is shown that the effect is negligible for the current mobile systems and observing plan.

  5. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system is estimated simultaneously. The calibrated system outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544

  6. Creating a Test Validated Structural Dynamic Finite Element Model of the X-56A Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truong, Samson

    2014-01-01

    Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test-bed, X-56A aircraft, is the flight demonstration of active flutter suppression, and therefore in this study, the identification of the primary and secondary modes for the structural model tuning based on the flutter analysis of the X-56A aircraft. The ground vibration test-validated structural dynamic finite element model of the X-56A aircraft is created in this study. The structural dynamic finite element model of the X-56A aircraft is improved using a model tuning tool. In this study, two different weight configurations of the X-56A aircraft have been improved in a single optimization run. Frequency and the cross-orthogonality (mode shape) matrix were the primary focus for improvement, while other properties such as center of gravity location, total weight, and offdiagonal terms of the mass orthogonality matrix were used as constraints. The end result was a more improved and desirable structural dynamic finite element model configuration for the X-56A aircraft. Improved frequencies and mode shapes in this study increased average flutter speeds of the X-56A aircraft by 7.6% compared to the baseline model.

  7. Creating a Test-Validated Finite-Element Model of the X-56A Aircraft Structure

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truong, Samson

    2014-01-01

    Small modeling errors in a finite-element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the X-56A Multi-Utility Technology Testbed aircraft is the flight demonstration of active flutter suppression and, therefore, in this study, the identification of the primary and secondary modes for the structural model tuning based on the flutter analysis of the X-56A aircraft. The ground-vibration test-validated structural dynamic finite-element model of the X-56A aircraft is created in this study. The structural dynamic finite-element model of the X-56A aircraft is improved using a model-tuning tool. In this study, two different weight configurations of the X-56A aircraft have been improved in a single optimization run. Frequency and the cross-orthogonality (mode shape) matrix were the primary focus for improvement, whereas other properties such as c.g. location, total weight, and off-diagonal terms of the mass orthogonality matrix were used as constraints. The end result was an improved structural dynamic finite-element model configuration for the X-56A aircraft. Improved frequencies and mode shapes in this study increased average flutter speeds of the X-56A aircraft by 7.6% compared to the baseline model.

  8. Estimating current and future streamflow characteristics at ungaged sites, central and eastern Montana, with application to evaluating effects of climate change on fish populations

    USGS Publications Warehouse

    Sando, Roy; Chase, Katherine J.

    2017-03-23

    A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods.Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.

  9. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination.

    PubMed

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-02-20

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, thus it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD model are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are two times smaller than those of the DD model, and the dispersions of the SD2 model are even more than twice smaller than those of the DD model. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 more advantageous in attitude determination under sheltered environments.

  10. Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination

    PubMed Central

    Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi

    2017-01-01

    With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, thus it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD model are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are two times smaller than those of the DD model, and the dispersions of the SD2 model are even more than twice smaller than those of the DD model. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 more advantageous in attitude determination under sheltered environments. PMID:28230753

  11. Effect of Orthokeratology on myopia progression: twelve-year results of a retrospective cohort study.

    PubMed

    Lee, Yueh-Chang; Wang, Jen-Hung; Chiu, Cheng-Jen

    2017-12-08

    Several studies reported the efficacy of orthokeratology for myopia control. Somehow, there is limited publication with follow-up longer than 3 years. This study aims to research whether overnight orthokeratology influences the progression rate of the manifest refractive error of myopic children in a longer follow-up period (up to 12 years). And if changes in progression rate are found, to investigate the relationship between refractive changes and different baseline factors, including refraction error, wearing age and lens replacement frequency. In addition, this study collects long-term safety profile of overnight orthokeratology. This is a retrospective study of sixty-six school-age children who received overnight orthokeratology correction between January 1998 and December 2013. Thirty-six subjects whose baseline age and refractive error matched with those in the orthokeratology group were selected to form control group. These subjects were followed up at least for 12 months. Manifest refractions, cycloplegic refractions, uncorrected and best-corrected visual acuities, power vector of astigmatism, corneal curvature, and lens replacement frequency were obtained for analysis. Data of 203 eyes were derived from 66 orthokeratology subjects (31 males and 35 females) and 36 control subjects (22 males and 14 females) enrolled in this study. Their wearing ages ranged from 7 years to 16 years (mean ± SE, 11.72 ± 0.18 years). The follow-up time ranged from 1 year to 13 years (mean ± SE, 6.32 ± 0.15 years). At baseline, their myopia ranged from -0.5 D to -8.0 D (mean ± SE, -3.70 ± 0.12 D), and astigmatism ranged from 0 D to -3.0 D (mean ± SE, -0.55 ± 0.05 D). Comparing with control group, orthokeratology group had a significantly (p < 0.001) lower trend of refractive error change during the follow-up periods. According to the analysis results of GEE model, greater power of astigmatism was found to be associated with increased change of refractive error during follow-up years. Overnight orthokeratology was effective in slowing myopia progression over a twelve-year follow-up period and demonstrated a clinically acceptable safety profile. Initial higher astigmatism power was found to be associated with increased change of refractive error during follow-up years.

  12. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  13. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  14. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  15. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  16. Do Work Condition Interventions Affect Quality and Errors in Primary Care? Results from the Healthy Work Place Study.

    PubMed

    Linzer, Mark; Poplau, Sara; Brown, Roger; Grossman, Ellie; Varkey, Anita; Yale, Steven; Williams, Eric S; Hicks, Lanis; Wallock, Jill; Kohnhorst, Diane; Barbouche, Michael

    2017-01-01

    While primary care work conditions are associated with adverse clinician outcomes, little is known about the effect of work condition interventions on quality or safety. A cluster randomized controlled trial of 34 clinics in the upper Midwest and New York City. Primary care clinicians and their diabetic and hypertensive patients. Quality improvement projects to improve communication between providers, workflow design, and chronic disease management. Intervention clinics received brief summaries of their clinician and patient outcome data at baseline. We measured work conditions and clinician and patient outcomes both at baseline and 6-12 months post-intervention. Multilevel regression analyses assessed the impact of work condition changes on outcomes. Subgroup analyses assessed impact by intervention category. There were no significant differences in error reduction (19 % vs. 11 %, OR of improvement 1.84, 95 % CI 0.70, 4.82, p = 0.21) or quality of care improvement (19 % improved vs. 44 %, OR 0.62, 95 % CI 0.58, 1.21, p = 0.42) between intervention and control clinics. The conceptual model linking work conditions, provider outcomes, and error reduction showed significant relationships between work conditions and provider outcomes (p ≤ 0.001) and a trend toward a reduced error rate in providers with lower burnout (OR 1.44, 95 % CI 0.94, 2.23, p = 0.09). Few quality metrics, short time span, fewer clinicians recruited than anticipated. Work-life interventions improving clinician satisfaction and well-being do not necessarily reduce errors or improve quality. Longer, more focused interventions may be needed to produce meaningful improvements in patient care. ClinicalTrials.gov # NCT02542995.

  17. Estimating sizes of faint, distant galaxies in the submillimetre regime

    NASA Astrophysics Data System (ADS)

    Lindroos, L.; Knudsen, K. K.; Fan, L.; Conway, J.; Coppin, K.; Decarli, R.; Drouart, G.; Hodge, J. A.; Karim, A.; Simpson, J. M.; Wardlow, J.

    2016-10-01

    We measure the sizes of redshift ˜2 star-forming galaxies by stacking data from the Atacama Large Millimeter/submillimeter Array (ALMA). We use a uv-stacking algorithm in combination with model fitting in the uv-domain and show that this allows for robust measures of the sizes of marginally resolved sources. The analysis is primarily based on the 344 GHz ALMA continuum observations centred on 88 submillimetre galaxies in the LABOCA ECDFS Submillimeter Survey (ALESS). We study several samples of galaxies at z ≈ 2 with M* ≈ 5 × 1010 M⊙, selected using near-infrared photometry (distant red galaxies, extremely red objects, sBzK-galaxies, and galaxies selected on photometric redshift). We find that the typical sizes of these galaxies are ˜0.6 arcsec which corresponds to ˜5 kpc at z = 2, this agrees well with the median sizes measured in the near-infrared z band (˜0.6 arcsec). We find errors on our size estimates of ˜0.1-0.2 arcsec, which agree well with the expected errors for model fitting at the given signal-to-noise ratio. With the uv-coverage of our observations (18-160 m), the size and flux density measurements are sensitive to scales out to 2 arcsec. We compare this to a simulated ALMA Cycle 3 data set with intermediate length baseline coverage, and we find that, using only these baselines, the measured stacked flux density would be an order of magnitude fainter. This highlights the importance of short baselines to recover the full flux density of high-redshift galaxies.

  18. Bias error reduction using ratios to baseline experiments. Heat transfer case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakroun, W.; Taylor, R.P.; Coleman, H.W.

    1993-10-01

    Employing a set of experiments devoted to examining the effect of surface finish (riblets) on convective heat transfer as an example, this technical note seeks to explore the notion that precision uncertainties in experiments can be reduced by repeated trials and averaging. This scheme for bias error reduction can give considerable advantage when parametric effects are investigated experimentally. When the results of an experiment are presented as a ratio with the baseline results, a large reduction in the overall uncertainty can be achieved when all the bias limits in the variables of the experimental result are fully correlated with thosemore » of the baseline case. 4 refs.« less

  19. Systematic errors in long baseline oscillation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Deborah A.; /Fermilab

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  20. Systematic evaluation of a time-domain Monte Carlo fitting routine to estimate the adult brain optical properties

    NASA Astrophysics Data System (ADS)

    Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.

    2013-03-01

    Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte-Carlo simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage to be adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performances of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only improved moderately the results over the homogeneous one. On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.

  1. Characteristics of BeiDou Navigation Satellite System Multipath and Its Mitigation Method Based on Kalman Filter and Rauch-Tung-Striebel Smoother.

    PubMed

    Zhang, Qiuzhao; Yang, Wei; Zhang, Shubi; Liu, Xin

    2018-01-12

    Global Navigation Satellite System (GNSS) carrier phase measurement for short baseline meets the requirements of deformation monitoring of large structures. However, the carrier phase multipath effect is the main error source with double difference (DD) processing. There are lots of methods to deal with the multipath errors of Global Position System (GPS) carrier phase data. The BeiDou navigation satellite System (BDS) multipath mitigation is still a research hotspot because the unique constellation design of BDS makes it different to mitigate multipath effects compared to GPS. Multipath error periodically repeats for its strong correlation to geometry of satellites, reflective surface and antenna which is also repetitive. We analyzed the characteristics of orbital periods of BDS satellites which are consistent with multipath repeat periods of corresponding satellites. The results show that the orbital periods and multipath periods for BDS geostationary earth orbit (GEO) and inclined geosynchronous orbit (IGSO) satellites are about one day but the periods of MEO satellites are about seven days. The Kalman filter (KF) and Rauch-Tung-Striebel Smoother (RTSS) was introduced to extract the multipath models from single difference (SD) residuals with traditional sidereal filter (SF). Wavelet filter and Empirical mode decomposition (EMD) were also used to mitigate multipath effects. The experimental results show that the three filters methods all have obvious effect on improvement of baseline accuracy and the performance of KT-RTSS method is slightly better than that of wavelet filter and EMD filter. The baseline vector accuracy on east, north and up (E, N, U) components with KF-RTSS method were improved by 62.8%, 63.6%, 62.5% on day of year 280 and 57.3%, 53.4%, 55.9% on day of year 281, respectively.

  2. GPS Attitude Determination Using Deployable-Mounted Antennas

    NASA Technical Reports Server (NTRS)

    Osborne, Michael L.; Tolson, Robert H.

    1996-01-01

    The primary objective of this investigation is to develop a method to solve for spacecraft attitude in the presence of potential incomplete antenna deployment. Most research on the use of the Global Positioning System (GPS) in attitude determination has assumed that the antenna baselines are known to less than 5 centimeters, or one quarter of the GPS signal wavelength. However, if the GPS antennas are mounted on a deployable fixture such as a solar panel, the actual antenna positions will not necessarily be within 5 cm of nominal. Incomplete antenna deployment could cause the baselines to be grossly in error, perhaps by as much as a meter. Overcoming this large uncertainty in order to accurately determine attitude is the focus of this study. To this end, a two-step solution method is proposed. The first step uses a least-squares estimate of the baselines to geometrically calculate the deployment angle errors of the solar panels. For the spacecraft under investigation, the first step determines the baselines to 3-4 cm with 4-8 minutes of data. A Kalman filter is then used to complete the attitude determination process, resulting in typical attitude errors of 0.50.

  3. Stochastic modeling for time series InSAR: with emphasis on atmospheric effects

    NASA Astrophysics Data System (ADS)

    Cao, Yunmeng; Li, Zhiwei; Wei, Jianchao; Hu, Jun; Duan, Meng; Feng, Guangcai

    2018-02-01

    Despite the many applications of time series interferometric synthetic aperture radar (TS-InSAR) techniques in geophysical problems, error analysis and assessment have been largely overlooked. Tropospheric propagation error is still the dominant error source of InSAR observations. However, the spatiotemporal variation of atmospheric effects is seldom considered in the present standard TS-InSAR techniques, such as persistent scatterer interferometry and small baseline subset interferometry. The failure to consider the stochastic properties of atmospheric effects not only affects the accuracy of the estimators, but also makes it difficult to assess the uncertainty of the final geophysical results. To address this issue, this paper proposes a network-based variance-covariance estimation method to model the spatiotemporal variation of tropospheric signals, and to estimate the temporal variance-covariance matrix of TS-InSAR observations. The constructed stochastic model is then incorporated into the TS-InSAR estimators both for parameters (e.g., deformation velocity, topography residual) estimation and uncertainty assessment. It is an incremental and positive improvement to the traditional weighted least squares methods to solve the multitemporal InSAR time series. The performance of the proposed method is validated by using both simulated and real datasets.

  4. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R) and R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of parametric full maximum likelihood and regression calibration (under the assumption that the set of true doses has a lognormal distribution), nonparametric full maximum likelihood, nonparametric regression calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
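
    A small simulation sketch of the error structure described above (Python/NumPy assumed): true doses are drawn lognormal, Q carries a classical multiplicative error, M carries a Berkson multiplicative error, and the binary response follows R/(1+R). All numeric values are illustrative, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_doses(n, lam0=1e-4, ear=5e-3, sd_q=0.3, sd_m=0.3):
    """Simulate calculated doses and binary responses under the classical
    (on Q) and Berkson (on M) multiplicative error models sketched above."""
    d_true = rng.lognormal(mean=0.0, sigma=1.0, size=n)            # true dose (Gy)
    m_true = rng.lognormal(mean=np.log(15.0), sigma=0.2, size=n)   # true thyroid mass
    f = 1.0                                                        # normalizing multiplier
    q_true = d_true * m_true / f                                   # true thyroid content

    q_mes = q_true * rng.lognormal(0.0, sd_q, n)   # classical: Q_mes = Q_tr * V_Q
    v_m = rng.lognormal(0.0, sd_m, n)
    m_mes = m_true / v_m                           # Berkson:   M_tr = M_mes * V_M

    d_mes = f * q_mes / m_mes                      # calculated (error-prone) dose
    r = lam0 + ear * d_true                        # risk is driven by the true dose
    y = rng.binomial(1, r / (1.0 + r))             # binary response
    return d_mes, d_true, y
```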

  5. Baseline Correction of Diffuse Reflection Near-Infrared Spectra Using Searching Region Standard Normal Variate (SRSNV).

    PubMed

    Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro

    2015-12-01

    An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), was proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of the partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixture in the 10 000-4000 cm(-1) region at 4 cm(-1) intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and then SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus the spectral window position obtained using the PLSR models revealed that the 8680–8364 cm(-1) region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680–8364 cm(-1) region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm(-1) and that due to glucose at 8364 cm(-1) was observed owing to successful baseline correction using SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a coefficient of determination (R2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated in the full spectral region gave an R2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These evaluation results indicate that SRSNV is effective in baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
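
    A compact sketch of the windowed SNV plus one-latent-variable PLSR search described above (Python with NumPy and scikit-learn assumed); the window size and cross-validation settings are illustrative simplifications of MWPLSR.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def search_region_snv(spectra, y, window=80):
    """Slide a window across the spectral axis, apply SNV inside it, and
    score a 1-LV PLSR model by cross-validated RMSE; the best-scoring
    window is the region used for baseline correction."""
    n_channels = spectra.shape[1]
    best = (np.inf, 0)
    for start in range(n_channels - window + 1):
        x_win = snv(spectra[:, start:start + window])
        y_hat = cross_val_predict(PLSRegression(n_components=1), x_win, y, cv=5)
        rmse = np.sqrt(np.mean((y - y_hat.ravel()) ** 2))
        if rmse < best[0]:
            best = (rmse, start)
    return best  # (cross-validated RMSE, start index of the optimal window)
```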

  6. Possibility of measuring Adler angles in charged current single pion neutrino-nucleus interactions

    NASA Astrophysics Data System (ADS)

    Sánchez, F.

    2016-05-01

    Uncertainties in modeling neutrino-nucleus interactions are a major contribution to systematic errors in long-baseline neutrino oscillation experiments. Accurate modeling of neutrino interactions requires additional experimental observables such as the Adler angles which carry information about the polarization of the Δ resonance and the interference with nonresonant single pion production. The Adler angles were measured with limited statistics in bubble chamber neutrino experiments as well as in electron-proton scattering experiments. We discuss the viability of measuring these angles in neutrino interactions with nuclei.

  7. Utility of an Occupational Therapy Driving Intervention for a Combat Veteran

    PubMed Central

    Monahan, Miriam; Canonizado, Maria; Winter, Sandra

    2014-01-01

    Many combat veterans are injured in motor vehicle crashes shortly after returning to civilian life, yet little evidence exists on effective driving interventions. In this single-subject design study, we compared clinical test results and driving errors in a returning combat veteran before and after an occupational therapy driving intervention. A certified driving rehabilitation specialist administered baseline clinical and simulated driving assessments; conducted three intervention sessions that discussed driving errors, retrained visual search skills, and invited commentary on driving; and administered a postintervention evaluation in conditions resembling those at baseline. Clinical test results were similar pre- and postintervention. Baseline versus postintervention driving errors were as follows: lane maintenance, 23 versus 7; vehicle positioning, 5 versus 1; signaling, 2 versus 0; speed regulation, 1 versus 1; visual scanning, 1 versus 0; and gap acceptance, 1 versus 0. Although the intervention appeared efficacious for this participant, threats to validity must be recognized and controlled for in a follow-up study. PMID:25005503

  8. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions, is considered. An improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate benefits and limitations of the second-order modeling. The results are validated by developing full-wave numerical simulations implementing the relevant scattering problem on a commercial tool.

  9. Atmospheric gradients from very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Macmillan, D. S.

    1995-01-01

    Azimuthal asymmetries in the atmospheric refractive index can lead to errors in estimated vertical and horizontal station coordinates. Daily average gradient effects can be as large as 50 mm of delay at a 7 deg elevation. To model gradients, the constrained estimation of gradient parameters was added to the standard VLBI solution procedure. Here, the analysis of two sets of data is summarized: the set of all geodetic VLBI experiments from 1990-1993 and a series of 12 state-of-the-art R&D experiments run on consecutive days in January 1994. In both cases, when the gradient parameters are estimated, the overall fit of the geodetic solution is improved at greater than the 99% confidence level. Repeatabilities of baseline lengths ranging up to 11,000 km are improved by 1 to 8 mm in a root-sum-square sense. This varies from about 20% to 40% of the total baseline length scatter without gradient modeling for the 1990-1993 series and 40% to 50% for the January series. Gradients estimated independently for each day as a piecewise linear function are mostly continuous from day to day within their formal uncertainties.

  10. PLATFORM DEFORMATION PHASE CORRECTION FOR THE AMiBA-13 COPLANAR INTERFEROMETER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Yu-Wei; Lin, Kai-Yang; Huang, Yau-De

    2013-05-20

    We present a new way to solve the platform deformation problem of coplanar interferometers. The platform of a coplanar interferometer can be deformed due to driving forces and gravity. A deformed platform will induce extra components into the geometric delay of each baseline and change the phases of observed visibilities. The reconstructed images will also be diluted due to the errors of the phases. The platform deformations of the Yuan-Tseh Lee Array for Microwave Background Anisotropy (AMiBA) were modeled based on photogrammetry data with about 20 mount pointing positions. We then used the differential optical pointing error between two optical telescopes to fit the model parameters in the entire horizontal coordinate space. With the platform deformation model, we can predict the errors of the geometric phase delays due to platform deformation with a given azimuth and elevation of the targets and calibrators. After correcting the phases of the radio point sources in the AMiBA interferometric data, we recover 50%-70% flux loss due to phase errors. This allows us to restore more than 90% of a source flux. The method outlined in this work is not only applicable to the correction of deformation for other coplanar telescopes but also to single-dish telescopes with deformation problems. This work also forms the basis of the upcoming science results of AMiBA-13.

  11. Adaptive control: Myths and realities

    NASA Technical Reports Server (NTRS)

    Athans, M.; Valavani, L.

    1984-01-01

    It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law, and the need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeroes) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as absence of nonminimum phase plant zeros on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption, which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.

  12. The impact of modelling errors on interferometer calibration for 21 cm power spectra

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline

    2017-09-01

    We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.

  13. Minimal entropy reconstructions of thermal images for emissivity correction

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.

    1999-03-01

    Low emissivity with correspondingly low thermal emission is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, thus reducing the net signal-to-noise ratio and degrading the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing. Errors in the baseline images are therefore additive, causing the resulting measurement error to double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which corrects not only for emissivity variations but also for variations in sensor response, using the baseline images at known temperatures. The minimal entropy reconstruction is based on a modified Hopfield neural network which finds the image that best explains the observed data and baseline data while having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.

  14. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
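
    A rough sketch (Python with pandas and statsmodels assumed) of the data "explosion" into subject-piece records and the Poisson GLM fit with a log-exposure offset; the frailty (random effect) term and the %PCFrailty specifics are omitted, and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def explode_survival(df, cuts):
    """Split each subject's follow-up time into pieces defined by `cuts`
    (piecewise constant baseline hazard).  Each output row is one
    subject-piece with its exposure time and event indicator, so a Poisson
    GLM with a log(exposure) offset reproduces the piecewise exponential fit.
    Expects columns 'time' and 'event'; other covariates are carried along."""
    rows = []
    for _, r in df.iterrows():
        for j, (lo, hi) in enumerate(zip(cuts[:-1], cuts[1:])):
            if r["time"] <= lo:
                break
            exposure = min(r["time"], hi) - lo
            event = int(r["event"] == 1 and r["time"] <= hi)
            rows.append({**r.to_dict(), "piece": j, "exposure": exposure, "y": event})
    return pd.DataFrame(rows)

# Illustrative use with a hypothetical data frame `surv` and covariate 'x':
# long = explode_survival(surv, cuts=[0, 1, 2, 5, 10])
# X = pd.get_dummies(long["piece"], prefix="piece", drop_first=True).join(long[["x"]])
# fit = sm.GLM(long["y"], sm.add_constant(X),
#              family=sm.families.Poisson(),
#              offset=np.log(long["exposure"])).fit()
```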

  15. Space shuttle navigation analysis. Volume 2: Baseline system navigation

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.

    1980-01-01

    Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.

  16. Regression dilution in the proportional hazards model.

    PubMed

    Hughes, M D

    1993-12-01

    The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship decreases, the adjustment required increases; when there is no censorship it is markedly higher than 1 + lambda and also depends on the true risk relationship.
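
    A tiny illustration of the high-censorship correction (Python); the numbers are made up.

```python
def adjust_regression_dilution(beta_naive, within_var, between_var):
    """Correct a naive proportional hazards estimate for regression dilution
    under high censorship: multiply by (1 + lambda), where lambda is the
    ratio of within-person variability to between-person variability.
    With less censorship a larger adjustment would be needed, as noted above."""
    lam = within_var / between_var
    return beta_naive * (1.0 + lam)

# Example: naive log hazard ratio 0.30 with lambda = 0.5 gives 0.45.
print(adjust_regression_dilution(0.30, within_var=1.0, between_var=2.0))
```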

  17. Using lean to improve medication administration safety: in search of the "perfect dose".

    PubMed

    Ching, Joan M; Long, Christina; Williams, Barbara L; Blackmore, C Craig

    2013-05-01

    At Virginia Mason Medical Center (Seattle), the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study was used in combination with Lean quality improvement efforts to address medication administration safety. Lean interventions were targeted at improving the medication room layout, applying visual controls, and implementing nursing standard work. The interventions were designed to prevent medication administration errors through improving six safe practices: (1) comparing medication with medication administration record, (2) labeling medication, (3) checking two forms of patient identification, (4) explaining medication to patient, (5) charting medication immediately, and (6) protecting the process from distractions/interruptions. Trained nurse auditors observed 9,244 doses for 2,139 patients. Following the intervention, the number of safe-practice violations decreased from 83 violations/100 doses at baseline (January 2010-March 2010) to 42 violations/100 doses at final follow-up (July 2011-September 2011), resulting in an absolute risk reduction of 42 violations/100 doses (95% confidence interval [CI]: 35-48; p < .001). The number of medication administration errors decreased from 10.3 errors/100 doses at baseline to 2.8 errors/100 doses at final follow-up (absolute risk reduction: 7 errors/100 doses [95% CI: 5-10]; p < .001). The "perfect dose" score, reflecting compliance with all six safe practices and absence of any of the eight medication administration errors, improved from 37 in compliance/100 doses at baseline to 68 in compliance/100 doses at the final follow-up. Lean process improvements coupled with direct observation can contribute to substantial decreases in errors in nursing medication administration.

  18. The effect of observing novice and expert performance on acquisition of surgical skills on a robotic platform

    PubMed Central

    Harris, David J.; Vine, Samuel J.; Wilson, Mark R.; McGrath, John S.; LeBel, Marie-Eve

    2017-01-01

    Background Observational learning plays an important role in surgical skills training, following the traditional model of learning from expertise. Recent findings have, however, highlighted the benefit of observing not only expert performance but also error-strewn performance. The aim of this study was to determine which model (novice vs. expert) would lead to the greatest benefits when learning robotically assisted surgical skills. Methods 120 medical students with no prior experience of robotically assisted surgery completed a ring-carrying training task on three occasions: baseline, post-intervention, and one-week follow-up. The observation intervention consisted of a video model performing the ring-carrying task, with participants randomly assigned to view an expert model, a novice model, a mixed expert/novice model or no observation (control group). Participants were assessed for task performance and surgical instrument control. Results There were significant group differences post-intervention, with expert and novice observation groups outperforming the control group, but there were no clear group differences at a retention test one week later. There was no difference in performance between the expert-observing and error-observing groups. Conclusions Similar benefits were found when observing the traditional expert model or the error-strewn model, suggesting that viewing poor performance may be as beneficial as viewing expertise in the early acquisition of robotic surgical skills. Further work is required to understand, then inform, the optimal curriculum design when utilising observational learning in surgical training. PMID:29141046

  19. An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)

    NASA Technical Reports Server (NTRS)

    Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.

    1990-01-01

    Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made during July through September, 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan is inserted into Venus orbit in August 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged particle effect is completely calibrated with S- and X-Band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame, and by including high frequency Earth tidal terms in the Earth rotation model, adds a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.

  20. Modeling Individual Cyclic Variation in Human Behavior.

    PubMed

    Pierson, Emma; Althoff, Tim; Leskovec, Jure

    2018-04-01

    Cycles are fundamental to human health and behavior. Examples include mood cycles, circadian rhythms, and the menstrual cycle. However, modeling cycles in time series data is challenging because in most cases the cycles are not labeled or directly observed and need to be inferred from multidimensional measurements taken over time. Here, we present Cyclic Hidden Markov Models (CyHMMs) for detecting and modeling cycles in a collection of multidimensional heterogeneous time series data. In contrast to previous cycle modeling methods, CyHMMs deal with a number of challenges encountered in modeling real-world cycles: they can model multivariate data with both discrete and continuous dimensions; they explicitly model and are robust to missing data; and they can share information across individuals to accommodate variation both within and between individual time series. Experiments on synthetic and real-world health-tracking data demonstrate that CyHMMs infer cycle lengths more accurately than existing methods, with 58% lower error on simulated data and 63% lower error on real-world data compared to the best-performing baseline. CyHMMs can also perform functions which baselines cannot: they can model the progression of individual features/symptoms over the course of the cycle, identify the most variable features, and cluster individual time series into groups with distinct characteristics. Applying CyHMMs to two real-world health-tracking datasets, of human menstrual cycle symptoms and physical activity tracking data, yields important insights including which symptoms to expect at each point during the cycle. We also find that people fall into several groups with distinct cycle patterns, and that these groups differ along dimensions not provided to the model. For example, by modeling missing data in the menstrual cycles dataset, we are able to discover a medically relevant group of birth control users even though information on birth control is not given to the model.

  1. Modeling Individual Cyclic Variation in Human Behavior

    PubMed Central

    Pierson, Emma; Althoff, Tim; Leskovec, Jure

    2018-01-01

    Cycles are fundamental to human health and behavior. Examples include mood cycles, circadian rhythms, and the menstrual cycle. However, modeling cycles in time series data is challenging because in most cases the cycles are not labeled or directly observed and need to be inferred from multidimensional measurements taken over time. Here, we present Cyclic Hidden Markov Models (CyHMMs) for detecting and modeling cycles in a collection of multidimensional heterogeneous time series data. In contrast to previous cycle modeling methods, CyHMMs deal with a number of challenges encountered in modeling real-world cycles: they can model multivariate data with both discrete and continuous dimensions; they explicitly model and are robust to missing data; and they can share information across individuals to accommodate variation both within and between individual time series. Experiments on synthetic and real-world health-tracking data demonstrate that CyHMMs infer cycle lengths more accurately than existing methods, with 58% lower error on simulated data and 63% lower error on real-world data compared to the best-performing baseline. CyHMMs can also perform functions which baselines cannot: they can model the progression of individual features/symptoms over the course of the cycle, identify the most variable features, and cluster individual time series into groups with distinct characteristics. Applying CyHMMs to two real-world health-tracking datasets—of human menstrual cycle symptoms and physical activity tracking data—yields important insights including which symptoms to expect at each point during the cycle. We also find that people fall into several groups with distinct cycle patterns, and that these groups differ along dimensions not provided to the model. For example, by modeling missing data in the menstrual cycles dataset, we are able to discover a medically relevant group of birth control users even though information on birth control is not given to the model. PMID:29780976

  2. A general framework for parametric survival analysis.

    PubMed

    Crowther, Michael J; Lambert, Paul C

    2014-12-30

    Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions of the baseline hazard function, such as monotonicity, which is often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013), to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular, illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Generation of Simulated Tracking Data for LADEE Operational Readiness Testing

    NASA Technical Reports Server (NTRS)

    Woodburn, James; Policastri, Lisa; Owens, Brandon

    2015-01-01

    Operational Readiness Tests were an important part of the pre-launch preparation for the LADEE mission. The generation of simulated tracking data to stress the Flight Dynamics System and the Flight Dynamics Team was important for satisfying the testing goal of demonstrating that the software and the team were ready to fly the operational mission. The simulated tracking data were generated in a manner that incorporated the effects of errors in the baseline dynamical model, errors in maneuver execution, and phenomenology associated with various tracking-system-based components. The ability of the mission team to overcome these challenges in a realistic flight dynamics scenario indicated that the team and the flight dynamics system were ready to fly the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission.

  4. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    PubMed Central

    Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.

    2014-01-01

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy. PMID:24506630
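
    For readers unfamiliar with the landmark step, the following is a generic least-squares rigid landmark registration (Kabsch/Procrustes) sketch with a target registration error helper (Python/NumPy assumed); it is not the registration code used in the study.

```python
import numpy as np

def landmark_rigid_registration(src, dst):
    """Least-squares rigid (rotation + translation) alignment of paired
    landmark fiducials via the Kabsch solution.  src and dst are (n, 3)
    arrays of corresponding points; returns R, t minimizing
    ||R @ src_i + t - dst_i|| over all landmarks."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def target_registration_error(R, t, targets_src, targets_dst):
    """RMS distance between transformed target points and their ground truth."""
    mapped = targets_src @ R.T + t
    return np.sqrt(np.mean(np.sum((mapped - targets_dst) ** 2, axis=1)))
```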

  5. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy.

  6. Informed baseline subtraction of proteomic mass spectrometry data aided by a novel sliding window algorithm.

    PubMed

    Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J

    2016-01-01

    Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of magnitude on many simulated datasets. The advantages of the proposed pipeline include informed and data-specific input arguments for baseline subtraction methods, the avoidance of time-intensive and subjective piecewise baseline subtraction, and the ability to automate baseline subtraction completely. Moreover, individual steps can be adopted as stand-alone routines.
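
    For orientation, here is the kind of naive sliding-window baseline estimate that the proposed 'continuous' line segment algorithm is designed to outperform (Python/NumPy assumed); the window half-width is an illustrative tuning parameter that should roughly match local peak width.

```python
import numpy as np

def sliding_window_baseline(intensity, half_width):
    """Naive sliding-window baseline: for each point, take the minimum
    intensity within +/- half_width channels and subtract it from the
    spectrum.  Returns (baseline-subtracted signal, estimated baseline)."""
    n = len(intensity)
    baseline = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        baseline[i] = intensity[lo:hi].min()
    return intensity - baseline, baseline
```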

  7. Moon-Based INSAR Geolocation and Baseline Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Ren, Yuanzhen; Ye, Hanlin; Guo, Huadong; Ding, Yixing; Ruan, Zhixing; Lv, Mingyang; Dou, Changyong; Chen, Zhaoning

    2016-07-01

    An Earth observation platform is a host, and the characteristics of the platform to some extent determine the capability for Earth observation. Currently most platforms under development are satellites; by contrast, carrying out systematic observations with a Moon-based Earth observation platform is still a new concept. The Moon is Earth's only natural satellite and the only one that humans have reached; observing the Earth with sensors from the Moon offers a different perspective. Moon-based InSAR (SAR interferometry), one of the important Earth observation technologies, has all-day, all-weather observation capability, but its uniqueness still needs to be analyzed. This article discusses key issues of geometric positioning and baseline parameters of Moon-based InSAR. Based on the ephemeris data, the position, libration and attitude of the Earth and Moon are obtained, and the position of the Moon-based SAR sensor can be obtained by coordinate transformation from the fixed selenocentric coordinate system to the terrestrial coordinate system; together with the range-Doppler equation, the positioning model is analyzed. After establishing the Moon-based InSAR baseline equation, the different baseline errors are analyzed, and the influence of the Moon-based InSAR baseline on Earth observation applications is obtained.

  8. Clinical time series prediction: towards a hierarchical dynamical system framework

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patient in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671

  9. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors.

    PubMed

    Thipphavong, David P

    2016-09-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
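
    The selection step itself is simple enough to sketch (Python, with hypothetical types and names); the candidate trajectories are assumed to have been generated by re-running the trajectory predictor with a range of aircraft weight parameters.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Trajectory:
    weight: float                        # assumed aircraft weight (kg)
    toc_time: float                      # predicted top-of-climb time (s)
    altitude: Callable[[float], float]   # predicted altitude as a function of time

def select_by_toc(candidates: Sequence[Trajectory], observed_toc: float) -> Trajectory:
    """Pick the candidate trajectory whose predicted top-of-climb time is
    closest to the observed (or reported) TOC time."""
    return min(candidates, key=lambda traj: abs(traj.toc_time - observed_toc))
```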

  10. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors

    PubMed Central

    Thipphavong, David P.

    2017-01-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%. PMID:28684883

  11. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors

    NASA Technical Reports Server (NTRS)

    Thipphavong, David P.

    2016-01-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.

  12. Evaluation of malaria rapid diagnostic test (RDT) use by community health workers: a longitudinal study in western Kenya.

    PubMed

    Boyce, Matthew R; Menya, Diana; Turner, Elizabeth L; Laktabai, Jeremiah; Prudhomme-O'Meara, Wendy

    2018-05-18

    Malaria rapid diagnostic tests (RDTs) are a simple, point-of-care technology that can improve the diagnosis and subsequent treatment of malaria. They are an increasingly common diagnostic tool, but concerns remain about their use by community health workers (CHWs). These concerns regard the long-term trends relating to infection prevention measures, the interpretation of test results and adherence to treatment protocols. This study assessed whether CHWs maintained their competency at conducting RDTs over a 12-month timeframe, and if this competency varied with specific CHW characteristics. From June to September, 2015, CHWs (n = 271) were trained to conduct RDTs using a 3-day validated curriculum and a baseline assessment was completed. Between June and August, 2016, CHWs (n = 105) were randomly selected and recruited for follow-up assessments using a 20-step checklist that classified steps as relating to safety, accuracy, and treatment; 103 CHWs participated in follow-up assessments. Poisson regressions were used to test for associations between error counts and CHW characteristics at follow-up, and Poisson regression models fit using generalized estimating equations were used to compare data across time-points. At both baseline and follow-up observations, at least 80% of CHWs correctly completed 17 of the 20 steps. CHWs being 50 years of age or older was associated with increased total errors and safety errors at baseline and follow-up. At follow-up, prior experience conducting RDTs was associated with fewer errors. Performance, as it related to the correct completion of all checklist steps and safety steps, did not decline over the 12 months and performance of accuracy steps improved (mean error ratio: 0.51; 95% CI 0.40-0.63). Visual interpretation of RDT results yielded a CHW sensitivity of 92.0% and a specificity of 97.3% when compared to interpretation by the research team. None of the characteristics investigated was found to be significantly associated with RDT interpretation. With training, most CHWs performing RDTs maintain diagnostic testing competency over at least 12 months. CHWs generally perform RDTs safely and accurately interpret results. Younger age and prior experiences with RDTs were associated with better testing performance. Future research should investigate the mode by which CHW characteristics impact RDT procedures.
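
    A minimal sketch of a Poisson model fit with generalized estimating equations for the across-time-point comparison (Python with pandas and statsmodels assumed); all variable and column names are hypothetical, not the study's dataset.

```python
import statsmodels.api as sm

def fit_error_trend(df):
    """Poisson GEE with an exchangeable working correlation within CHWs,
    comparing error counts across baseline and follow-up assessments.
    Expects a long-format DataFrame with one row per CHW per assessment and
    illustrative columns 'errors', 'followup' (0/1), 'age50' (0/1), 'chw_id'."""
    X = sm.add_constant(df[["followup", "age50"]])
    model = sm.GEE(df["errors"], X, groups=df["chw_id"],
                   family=sm.families.Poisson(),
                   cov_struct=sm.cov_struct.Exchangeable())
    return model.fit()
```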

  13. Errors of Omission and Commission during Alternative Reinforcement of Compliance: The Effects of Varying Levels of Treatment Integrity

    ERIC Educational Resources Information Center

    Leon, Yanerys; Wilder, David A.; Majdalany, Lina; Myers, Kristin; Saini, Valdeep

    2014-01-01

    We conducted two experiments to evaluate the effects of errors of omission and commission during alternative reinforcement of compliance in young children. In Experiment 1, we evaluated errors of omission by examining two levels of integrity during alternative reinforcement (20 and 60%) for child compliance following no treatment (baseline) versus…

  14. Measuring rapid ocean tidal earth orientation variations with very long baseline interferometry

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.; Gross, R. S.

    1993-01-01

    Ocean tidal effects on universal time and polar motion (UTPM) are investigated at four nearly diurnal (K1, P1, O1, and Q1) and four nearly semidiurnal (K2, S2, M2, and N2) frequencies by analyzing very long baseline interferometry (VLBI) data extending from 1978 to 1992. We discuss limitations of comparisons between experiment and theory for the retrograde nearly diurnal polar motion components due to their degeneracy with prograde components of the nutation model. Estimating amplitudes of contributions to the modeled VLBI observables at these eight frequencies produces a statistically highly significant improvement of 7 mm to the residuals of a fit to the observed delays. Use of such an improved UTPM model also reduces the 14-30 mm scatter of baseline lengths about a time-linear model of tectonic motion by 3-14 mm, also with high significance levels. A total of 28 UTPM ocean tidal amplitudes can be unambiguously estimated from the data, with resulting UT1 and PM magnitudes as large as 21 microseconds and 270 microarc seconds and formal uncertainties of the order of 0.3 microseconds and 5 microarc seconds for UT1 and PM, respectively. Empirically determined UTPM amplitudes and phases are compared to values calculated theoretically by Gross from Seiler's global ocean tide model. The discrepancy between theory and experiment is larger by a factor of 3 for UT1 amplitudes (9 microseconds) than for prograde PM amplitudes (42 microarc seconds). The 14-year VLBI data span strongly attenuates the influence of mismodeled effects on estimated UTPM amplitudes and phases that are not coherent with the eight frequencies of interest. Magnitudes of coherent and quasi-coherent systematic errors are quantified by means of internal consistency tests. We conclude that coherent systematic effects are many times larger than the formal uncertainties and can be as large as 4 microseconds for UT1 and 60 microarc seconds for polar motion. On the basis of such realistic error estimates, 22 of the 31 fitted UTPM ocean tidal amplitudes differ from zero by more than 2 sigma.

  15. Measuring rapid ocean tidal earth orientation variations with very long baseline interferometry

    NASA Astrophysics Data System (ADS)

    Sovers, O. J.; Jacobs, C. S.; Gross, R. S.

    1993-11-01

    Ocean tidal effects on universal time and polar motion (UTPM) are investigated at four nearly diurnal (K1, P1, O1, and Q1) and four nearly semidiurnal (K2, S2, M2, and N2) frequencies by analyzing very long baseline interferometry (VLBI) data extending from 1978 to 1992. We discuss limitations of comparisons between experiment and theory for the retrograde nearly diurnal polar motion components due to their degeneracy with prograde components of the nutation model. Estimating amplitudes of contributions to the modeled VLBI observables at these eight frequencies produces a statistically highly significant improvement of 7 mm to the residuals of a fit to the observed delays. Use of such an improved UTPM model also reduces the 14-30 mm scatter of baseline lengths about a time-linear model of tectonic motion by 3-14 mm, also with high significance levels. A total of 28 UTPM ocean tidal amplitudes can be unambiguously estimated from the data, with resulting UT1 and PM magnitudes as large as 21 μs and 270 microarc seconds (μas) and formal uncertainties of the order of 0.3 μs and 5 μas for UTI and PM, respectively. Empirically determined UTPM amplitudes and phases are compared to values calculated theoretically by Gross from Seiler's global ocean tide model. The discrepancy between theory and experiment is larger by a factor of 3 for UT1 amplitudes (9 μs) than for prograde PM amplitudes (42 μas). The 14-year VLBI data span strongly attenuates the influence of mismodeled effects on estimated UTPM amplitudes and phases that are not coherent with the eight frequencies of interest. Magnitudes of coherent and quasi-coherent systematic errors are quantified by means of internal consistency tests. We conclude that coherent systematic effects are many times larger than the formal uncertainties and can be as large as 4 μs for UT1 and 60 μas for polar motion. On the basis of such realistic error estimates, 22 of the 31 fitted UTPM ocean tidal amplitudes differ from zero by more than 2σ.
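
    As a sketch of how amplitudes at fixed tidal frequencies can be estimated by least squares (Python/NumPy assumed), the function below fits in-phase and quadrature terms to a time series; in the actual analysis these terms are estimated inside the full VLBI delay model rather than from a UT1/PM series directly.

```python
import numpy as np

def fit_tidal_amplitudes(t, observed, frequencies):
    """Least-squares estimate of cosine/sine (in-phase/quadrature) amplitudes
    at known tidal frequencies from an Earth-orientation time series.
    t is in days, frequencies in cycles per day; returns one (cos, sin)
    amplitude pair per frequency."""
    cols = []
    for f in frequencies:
        w = 2.0 * np.pi * f * t
        cols.extend([np.cos(w), np.sin(w)])
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return coeffs.reshape(-1, 2)   # one (cos, sin) row per frequency
```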

  16. Vision-Enhancing Interventions in Nursing Home Residents and Their Short-Term Impact on Physical and Cognitive Function

    PubMed Central

    Elliott, Amanda F.; McGwin, Gerald; Owsley, Cynthia

    2009-01-01

    OBJECTIVE To evaluate the effect of vision-enhancing interventions (i.e., cataract surgery or refractive error correction) on physical function and cognitive status in nursing home residents. DESIGN Longitudinal cohort study. SETTING Seventeen nursing homes in Birmingham, AL. PARTICIPANTS A total of 187 English-speaking older adults (>55 years of age). INTERVENTION Participants took part in one of two vision-enhancing interventions: cataract surgery or refractive error correction. Each group was compared against a control group (persons eligible for but who declined cataract surgery, or who received delayed correction of refractive error). MEASUREMENTS Physical function (i.e., ability to perform activities of daily living and mobility) was assessed with a series of self-report and certified nursing assistant ratings at baseline and at 2 months for the refractive error correction group, and at 4 months for the cataract surgery group. The Mini Mental State Exam was also administered. RESULTS No significant differences existed within or between groups from baseline to follow-up on any of the measures of physical function. Mental status scores significantly declined from baseline to follow-up for both the immediate (p= 0.05) and delayed (p< 0.02) refractive error correction groups and for the cataract surgery control group (p= 0.05). CONCLUSION Vision-enhancing interventions did not lead to short-term improvements in physical functioning or cognitive status in this sample of elderly nursing home residents. PMID:19170783

  17. In-flight measurement of the National Oceanic and Atmospheric Administration (NOAA)-10 static Earth sensor error

    NASA Technical Reports Server (NTRS)

    Harvie, E.; Filla, O.; Baker, D.

    1993-01-01

    Analysis performed in the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) measures error in the static Earth sensor onboard the National Oceanic and Atmospheric Administration (NOAA)-10 spacecraft using flight data. Errors are computed as the difference between Earth sensor pitch and roll angle telemetry and reference pitch and roll attitude histories propagated by gyros. The flight data error determination illustrates the effect on horizon sensing of systematic variation in the Earth infrared (IR) horizon radiance with latitude and season, as well as the effect of anomalies in the global IR radiance. Results of the analysis provide a comparison between static Earth sensor flight performance and that of scanning Earth sensors studied previously in the GSFC/FDD. The results also provide a baseline for evaluating various models of the static Earth sensor. Representative days from the NOAA-10 mission indicate the extent of uniformity and consistency over time of the global IR horizon. A unique aspect of the NOAA-10 analysis is the correlation of flight data errors with independent radiometric measurements of stratospheric temperature. The determination of the NOAA-10 static Earth sensor error contributes to realistic performance expectations for missions to be equipped with similar sensors.

  18. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    NASA Astrophysics Data System (ADS)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.

  19. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform

    PubMed Central

    Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia

    2017-01-01

    To address the limitation of the existing UAV (unmanned aerial vehicles) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms by using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. In this paper, the influence of the positional relationship between UAVs and the target on localization accuracy has been studied in detail to obtain an ideal measuring position and the optimal localization position where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and yielded the following results: σB=1.63×10−4 (°), σL=1.35×10−4 (°), σH=15.8 (m), σsum=27.6 (m), where σB represents the longitude error, σL represents the latitude error, σH represents the altitude error, and σsum represents the error radius. PMID:28067814
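
    The core geometric step of such a system, intersecting two noisy line-of-sight rays from known UAV positions, can be sketched as the closest-point problem for two 3D lines. This is a generic two-ray triangulation under assumed positions and noise levels, not the paper's full model with its coordinate transformations or adaptive Kalman filtering.

```python
import numpy as np

def intersect_los(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two lines p_i + t_i * d_i.
    A generic two-ray triangulation, not the paper's localization model."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for the parameters that make the connecting vector perpendicular to both rays.
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Hypothetical geometry: two UAVs a few kilometres from a ground target, with small
# perturbations on the line-of-sight vectors standing in for platform pointing errors.
target = np.array([2500.0, 4000.0, 0.0])
uav1, uav2 = np.array([0.0, 0.0, 3000.0]), np.array([5000.0, 0.0, 3000.0])
rng = np.random.default_rng(1)
los1 = (target - uav1) + rng.normal(scale=5.0, size=3)
los2 = (target - uav2) + rng.normal(scale=5.0, size=3)
est = intersect_los(uav1, los1, uav2, los2)
print(np.linalg.norm(est - target))   # localization error in metres for this draw
```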

  20. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform.

    PubMed

    Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia

    2017-01-06

    To address the limitation of the existing UAV (unmanned aerial vehicles) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms by using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. In this paper, the influence of the positional relationship between UAVs and the target on localization accuracy has been studied in detail to obtain an ideal measuring position and the optimal localization position where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and yielded the following results: σB = 1.63 × 10−4 (°), σL = 1.35 × 10−4 (°), σH = 15.8 (m), σsum = 27.6 (m), where σB represents the longitude error, σL represents the latitude error, σH represents the altitude error, and σsum represents the error radius.

  1. Assessing the learning curve for the acquisition of laparoscopic skills on a virtual reality simulator.

    PubMed

    Sherman, V; Feldman, L S; Stanbridge, D; Kazmi, R; Fried, G M

    2005-05-01

    The aim of this study was to develop summary metrics and assess the construct validity for a virtual reality laparoscopic simulator (LapSim) by comparing the learning curves of three groups with different levels of laparoscopic expertise. Three groups of subjects ('expert', 'junior', and 'naïve') underwent repeated trials on three LapSim tasks. Formulas were developed to calculate scores for efficiency ('time-error') and economy of motion ('motion') using metrics generated by the software after each drill. Data (mean ± SD) were evaluated by analysis of variance (ANOVA). Significance was set at p < 0.05. All three groups improved significantly from baseline to final for both 'time-error' and 'motion' scores. There were significant differences between groups in 'time-error' performance at baseline and final, due to higher scores in the 'expert' group. A significant difference in 'motion' scores was seen only at baseline. We have developed summary metrics for the LapSim that differentiate among levels of laparoscopic experience. This study also provides evidence of construct validity for the LapSim.

  2. A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel.

    PubMed

    Tyner, Bryan C; Fienup, Daniel M

    2015-09-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed. © Society for the Experimental Analysis of Behavior.

  3. DEM corrections on series of wrapped interferograms as a tool to improve deformation monitoring around Siling Co lake in Tibet.

    NASA Astrophysics Data System (ADS)

    Ducret, Gabriel; Doin, Marie-Pierre; Lasserre, Cécile; Guillaso, Stéphane; Twardzik, Cedric

    2010-05-01

    In order to increase our knowledge of the lithosphere rheological structure under the Tibetan plateau, we study the loading response due to lake Siling Co water level changes. The challenge here is to measure the deformation with an accuracy good enough to obtain a correct sensitivity to model parameters. The InSAR method in theory allows observation of the spatio-temporal pattern of deformation, but its exploitation is limited by unwrapping difficulties linked with temporal decorrelation and DEM errors in sloping and partially incoherent areas. This lake is a large endorheic lake at 4500 m elevation located north of the right-lateral strike-slip Gyaring Co fault, and just to the south of the Bangong Nujiang suture zone, on which numerous left-lateral strike-slip faults branch. The Siling Co lake water level has strongly changed in the past, as testified by numerous traces of palaeo-shorelines, clearly marked up to 60 m above the present-day level. In the last years, the water level in this lake increased by about 1 m/yr, a remarkably fast rate given the large lake surface (1600 km2). The present-day ground subsidence associated with the water level increase is studied by InSAR using all ERS and Envisat archived data on track 219, obtained through the Dragon cooperation program. We chose to compute 750 km long differential interferograms centered on the lake to provide a good constraint on the reference. A redundant network of small baseline interferograms is computed with perpendicular baselines smaller than 500 m. The coherence is quickly lost with time (over one year), particularly to the north of the lake because of freeze-thaw cycles. Unwrapping thus becomes hazardous in this configuration, and fails on phase jumps created by DEM contrasts. The first step is to improve the simulated elevation field in radar geometry from the Digital Elevation Model (here SRTM) in order to exploit the interferometric phase in layover areas. Then, to estimate the DEM error, we combine the Permanent Scatterer and Small Baseline methods. The aim is to improve spatial and temporal coherence. We use as references strong and stable amplitude points or spatially coherent areas, scattered within the SAR scene. We calculate the relative elevation error of every point in the neighbourhood of the reference points. A global inversion then performs spatial integration of the local errors at the radar image scale. Finally, we evaluate how the DEM correction of wrapped interferograms improves the unwrapping step. Furthermore, to help unwrapping we also compute and then remove from the wrapped interferograms the residual orbital trend and the phase-elevation relationship due to variations in atmospheric stratification. A stack of unwrapped small baseline interferograms clearly shows the average subsidence rate around the lake of about 4 mm/yr associated with the present-day water level increase. To compare the observed deformation to the water level elevation changes, we extract the water level changes from satellite images covering the period 1972 to 2009. The deformation signal is discussed in terms of end-member visco-elastic models of the lithosphere and uppermost mantle.

  4. [A site index model for Larix principis-rupprechtii plantation in Saihanba, north China].

    PubMed

    Wang, Dong-zhi; Zhang, Dong-yan; Jiang, Feng-ling; Bai, Ye; Zhang, Zhi-dong; Huang, Xuan-rui

    2015-11-01

    It is often difficult to estimate site indices for different types of plantation by using an ordinary site index model. The objective of this paper was to establish a site index model for plantations in varied site conditions, and assess the site qualities. In this study, a nonlinear mixed site index model was constructed based on data from the second class forest resources inventory and 173 temporary sample plots. The results showed that the main limiting factors for height growth of Larix principis-rupprechtii were elevation, slope, soil thickness and soil type. A linear regression model was constructed for the main constraining site factors and dominant tree height, with the coefficient of determination being 0.912, and the baseline age of Larix principis-rupprechtii determined as 20 years. The nonlinear mixed site index model parameters for the main site types were estimated (R2 > 0.85, the error between the predicted value and the actual value was in the range of -0.43 to 0.45, with an average root mean squared error (RMSE) in the range of 0.907 to 1.148). The estimation error between the predicted value and the actual value of dominant tree height for the main site types was in the confidence interval of [-0.95, 0.95]. The site quality of the high altitude-shady-sandy loam-medium soil layer was the highest and that of low altitude-sunny-sandy loam-medium soil layer was the lowest, while the other two sites were moderate.

  5. Design of experiments-based monitoring of critical quality attributes for the spray-drying process of insulin by NIR spectroscopy.

    PubMed

    Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger

    2012-09-01

    Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) were used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48% with thermal gravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results in this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
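
    A minimal sketch of the modelling approach described (PLS regression relating spectra to a quality attribute, with prediction error reported as RMSEP) is shown below using scikit-learn. The synthetic spectra, response, and number of latent variables are placeholders, not the study's NIR data or preprocessing.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for NIR spectra: 120 samples x 600 wavelengths, with a response
# (e.g. moisture content, %) that depends on a few spectral regions.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 600)).cumsum(axis=1)          # smooth, correlated "spectra"
y = 0.02 * X[:, 150] - 0.015 * X[:, 400] + rng.normal(scale=0.2, size=120) + 3.0

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

pls = PLSRegression(n_components=5)                     # number of latent variables (placeholder)
pls.fit(X_cal, y_cal)

# RMSEP: root mean squared error of prediction on held-out samples.
rmsep = np.sqrt(mean_squared_error(y_val, pls.predict(X_val).ravel()))
print(f"RMSEP: {rmsep:.3f}")
```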

  6. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics.

    PubMed

    Cheng, Sen; Sabes, Philip N

    2007-04-01

    The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
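
    A toy simulation of the trial-to-trial model described above might look as follows: the calibration state decays toward baseline, is corrected by a fraction of each perceived error, and accumulates state noise, while performance noise is added independently on every trial. Sign conventions, retention and learning rates, and noise levels are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500
decay, learn = 0.98, 0.25       # retention toward baseline and error-correction fraction (placeholders)
state_sd, perf_sd = 0.3, 1.0    # state noise vs. independent performance noise (placeholders)

shift = rng.normal(scale=2.0, size=n_trials)    # random shift of visual feedback on each trial
x = np.zeros(n_trials + 1)                      # internal state of sensorimotor calibration
err = np.zeros(n_trials)                        # visually perceived reach error

for t in range(n_trials):
    # Perceived error: the imposed shift, minus whatever the current state compensates,
    # plus trial-to-trial performance noise.
    err[t] = shift[t] - x[t] + rng.normal(scale=perf_sd)
    # State update: decay toward baseline, correction by a fraction of the error,
    # and accumulated state noise.
    x[t + 1] = decay * x[t] + learn * err[t] + rng.normal(scale=state_sd)

# State noise accumulates across trials, inducing temporal correlation in the error sequence.
print("lag-1 error correlation:", round(float(np.corrcoef(err[:-1], err[1:])[0, 1]), 3))
```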

  7. Data mining: Potential applications in research on nutrition and health.

    PubMed

    Batterham, Marijka; Neale, Elizabeth; Martin, Allison; Tapsell, Linda

    2017-02-01

    Data mining enables further insights from nutrition-related research, but caution is required. The aim of this analysis was to demonstrate and compare the utility of data mining methods in classifying a categorical outcome derived from a nutrition-related intervention. Baseline data (23 variables, 8 categorical) on participants (n = 295) in an intervention trial were used to classify participants in terms of meeting the criterion of achieving 10 000 steps per day. Results from classification and regression trees (CARTs), random forests, adaptive boosting, logistic regression, support vector machines and neural networks were compared using area under the curve (AUC) and error assessments. The CART produced the best model when considering the AUC (0.703), overall error (18%) and within-class error (28%). Logistic regression also performed reasonably well compared to the other models (AUC 0.675, overall error 23%, within-class error 36%). All the methods gave different rankings of variables' importance. CART found that body fat, quality of life using the SF-12 Physical Component Summary (PCS) and the cholesterol:HDL ratio were the most important predictors of meeting the 10 000 steps criterion, while logistic regression showed the SF-12 PCS, glucose levels and level of education to be the most significant predictors (P ≤ 0.01). Differing outcomes suggest caution is required with a single data mining method, particularly in a dataset with nonlinear relationships and outliers and when exploring relationships that were not the primary outcomes of the research. © 2017 Dietitians Association of Australia.
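
    A sketch of the comparison workflow (several classifiers ranked by cross-validated AUC on baseline predictors of a binary outcome) is given below with scikit-learn. The synthetic data stand in for the trial's 23 baseline variables; the models and settings are illustrative, not the exact configurations used in the analysis.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for baseline trial data: 295 participants, 23 predictors,
# binary outcome (met / did not meet the 10 000-steps criterion).
X, y = make_classification(n_samples=295, n_features=23, n_informative=6,
                           random_state=0)

models = {
    "CART (decision tree)": DecisionTreeClassifier(max_depth=4, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
}

# Cross-validated AUC gives a comparable ranking across methods, as in the abstract.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:22s} AUC = {auc:.3f}")
```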

  8. Error analyses of JEM/SMILES standard products on L2 operational system

    NASA Astrophysics Data System (ADS)

    Mitsuda, C.; Takahashi, C.; Suzuki, M.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.; Iwata, Y.; Taniguchi, H.

    2009-12-01

    SMILES (Superconducting Submillimeter-wave Limb-Emission Sounder), which has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is planned to be launched in September 2009 and will be on board the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES measures the atmospheric limb emission from stratospheric minor constituents in the 640 GHz band. Target species on the L2 operational system are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO and O17OO). SMILES carries 4 K cooled Superconductor-Insulator-Superconductor mixers to carry out high-sensitivity observations. In the sub-millimeter band, water vapor absorption is an important factor in determining the tropospheric and stratospheric brightness temperature. The uncertainty of the water vapor absorption influences the accuracy of the retrieved molecular vertical profiles. Since the SMILES bands are narrow and far from H2O lines, it is a good approximation to treat this uncertainty as a linear function of frequency. We therefore include the 0th- and 1st-order coefficients of a 'baseline' function, rather than the water vapor profile itself, in the state vector and retrieve them to remove the influence of the water vapor uncertainty. We performed retrieval simulations using spectra computed by the L2 operational forward model for various H2O conditions (±5% and ±10% differences between the true and a priori profiles in the stratosphere, and ±10% and ±20% in the troposphere). The results show that the incremental errors of the molecules are smaller than 10% of the measurement errors when the height correlations of the baseline coefficients and temperature are assumed to be 10 km. In conclusion, the retrieval of the baseline coefficients effectively suppresses the profile error due to bias in the water vapor profile.
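
    The idea of absorbing a frequency-linear bias into retrieved baseline coefficients can be illustrated with a toy linear retrieval: appending offset and slope terms to the state vector keeps the bias from leaking into the molecular amplitude. The line shape, bias, and noise below are placeholders, not the SMILES forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
freq = np.linspace(-0.4, 0.4, 200)                 # frequency offset across a narrow band (toy units)

line_shape = 1.0 / (1.0 + (freq / 0.05) ** 2)      # Lorentzian-like molecular signature
true_amp = 2.0                                     # "true" line amplitude (arbitrary units)
bias = 0.8 + 1.5 * freq                            # frequency-linear bias, e.g. mismodeled H2O absorption
spectrum = true_amp * line_shape + bias + rng.normal(scale=0.05, size=freq.size)

# Retrieval A: state vector = [line amplitude] only -> the bias leaks into the amplitude.
A1 = line_shape[:, None]
amp_only, *_ = np.linalg.lstsq(A1, spectrum, rcond=None)

# Retrieval B: state vector = [line amplitude, baseline offset, baseline slope].
A2 = np.column_stack([line_shape, np.ones_like(freq), freq])
amp_baseline, *_ = np.linalg.lstsq(A2, spectrum, rcond=None)

print("without baseline terms:", round(amp_only[0], 3))
print("with baseline terms:   ", round(amp_baseline[0], 3), "(true: 2.0)")
```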

  9. Modifying the affective behavior of preschoolers with autism using in-vivo or video modeling and reinforcement contingencies.

    PubMed

    Gena, Angeliki; Couloura, Sophia; Kymissis, Effie

    2005-10-01

    The purpose of this study was to modify the affective behavior of three preschoolers with autism in home settings and in the context of play activities, and to compare the effects of video modeling to the effects of in-vivo modeling in teaching these children contextually appropriate affective responses. A multiple-baseline design across subjects, with a return to baseline condition, was used to assess the effects of treatment that consisted of reinforcement, video modeling, in-vivo modeling, and prompting. During training trials, reinforcement in the form of verbal praise and tokens was delivered contingent upon appropriate affective responding. Error correction procedures differed for each treatment condition. In the in-vivo modeling condition, the therapist used modeling and verbal prompting. In the video modeling condition, video segments of a peer modeling the correct response and verbal prompting by the therapist were used as corrective procedures. Participants received treatment in three categories of affective behavior--sympathy, appreciation, and disapproval--and were presented with a total of 140 different scenarios. The study demonstrated that both treatments--video modeling and in-vivo modeling--systematically increased appropriate affective responding in all response categories for the three participants. Additionally, treatment effects generalized across responses to untrained scenarios, the child's mother, new therapists, and time.

  10. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test

    NASA Astrophysics Data System (ADS)

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-01

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences like intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging from 14 to 20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning for MLC errors that resulted in clinical downtime.
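
    A minimal sketch of the detection logic described (comparing a new picket fence image's leaf-error statistics against the linac's baseline distribution using a median threshold and a Kolmogorov-Smirnov test) is shown below. The error distributions, tolerances, and the use of a two-sample KS test in place of the paper's one-sample test are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pooled baseline: per-leaf-pair position errors (mm) from ~6 months of weekly picket fences.
baseline_errors = rng.normal(loc=0.0, scale=0.1, size=26 * 60)

def flag_picket_fence(leaf_errors, baseline, median_tol=0.3, p_tol=1e-3):
    """Flag an image whose leaf-error distribution departs from the linac's baseline."""
    _, p_value = stats.ks_2samp(leaf_errors, baseline)   # two-sample stand-in for the one-sample KS test
    return abs(np.median(leaf_errors)) > median_tol or p_value < p_tol

good_day = rng.normal(scale=0.1, size=60)                # typical weekly test
bad_day = good_day + 0.5                                 # 0.5 mm systematic MLC offset
print(flag_picket_fence(good_day, baseline_errors),      # expected: False
      flag_picket_fence(bad_day, baseline_errors))       # expected: True
```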

  11. Intonation and dialog context as constraints for speech recognition.

    PubMed

    Taylor, P; King, S; Isard, S; Wright, H

    1998-01-01

    This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no" or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move type recognition with the move specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
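
    Conceptually, the move-type decision combines a dialog-model prior, an intonation-model score, and the move-specific language-model likelihoods, and the winning move type's language model is then used for recognition. The sketch below shows that combination with placeholder log-probability scores; the numbers, move types, and equal weights are assumptions, not values from the paper.

```python
# Hypothetical per-utterance scores (log probabilities); in the full system these come
# from a dialog model over move-type sequences, an intonation model, and move-specific
# bigram language models applied to the recognizer output.
move_types = ["acknowledge", "instruct", "query-yn"]
log_p_dialog = {"acknowledge": -0.7, "instruct": -1.6, "query-yn": -2.3}      # dialog-model prior
log_p_intonation = {"acknowledge": -1.2, "instruct": -0.9, "query-yn": -2.0}  # intonation model
log_p_words_given_move = {"acknowledge": -42.0, "instruct": -35.5, "query-yn": -39.1}  # move-specific LM

def best_move(weights=(1.0, 1.0, 1.0)):
    """Pick the move type maximizing the weighted sum of the three log scores."""
    w_d, w_i, w_l = weights
    return max(move_types,
               key=lambda m: w_d * log_p_dialog[m]
                           + w_i * log_p_intonation[m]
                           + w_l * log_p_words_given_move[m])

# The winning move type's bigram language model would then be used to rescore the utterance.
print(best_move())   # "instruct" with these placeholder scores
```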

  12. The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier

    DTIC Science & Technology

    2013-02-14

    [Slide-extraction residue: tables of peak water level percent error, from baseline simulations and wave sensitivity studies (including CD formulation comparisons and MAPE), at LAWMA/Amerada Pass, Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass.]

  13. Impact of Stewardship Interventions on Antiretroviral Medication Errors in an Urban Medical Center: A 3-Year, Multiphase Study.

    PubMed

    Zucker, Jason; Mittal, Jaimie; Jen, Shin-Pung; Cheng, Lucy; Cennimo, David

    2016-03-01

    There is a high prevalence of HIV infection in Newark, New Jersey, with University Hospital admitting approximately 600 HIV-infected patients per year. Medication errors involving antiretroviral therapy (ART) could significantly affect treatment outcomes. The goal of this study was to evaluate the effectiveness of various stewardship interventions in reducing the prevalence of prescribing errors involving ART. This was a retrospective review of all inpatients receiving ART for HIV treatment during three distinct 6-month intervals over a 3-year period. During the first year, the baseline prevalence of medication errors was determined. During the second year, physician and pharmacist education was provided, and a computerized order entry system with drug information resources and prescribing recommendations was implemented. Prospective audit of ART orders with feedback was conducted in the third year. Analyses and comparisons were made across the three phases of this study. Of the 334 patients with HIV admitted in the first year, 45% had at least one antiretroviral medication error and 38% had uncorrected errors at the time of discharge. After education and computerized order entry, significant reductions in medication error rates were observed compared to baseline rates; 36% of 315 admissions had at least one error and 31% had uncorrected errors at discharge. While the prevalence of antiretroviral errors in year 3 was similar to that of year 2 (37% of 276 admissions), there was a significant decrease in the prevalence of uncorrected errors at discharge (12%) with the use of prospective review and intervention. Interventions, such as education and guideline development, can aid in reducing ART medication errors, but a committed stewardship program is necessary to elicit the greatest impact. © 2016 Pharmacotherapy Publications, Inc.

  14. Self-refraction, ready-made glasses and quality of life among rural myopic Chinese children: a non-inferiority randomized trial.

    PubMed

    Zhou, Zhongqiang; Chen, Tingting; Jin, Ling; Zheng, Dongxing; Chen, Shangji; He, Mingguang; Silver, Josh; Ellwein, Leon; Moore, Bruce; Congdon, Nathan G

    2017-09-01

    To study, for the first time, the effect of wearing ready-made glasses and glasses with power determined by self-refraction on children's quality of life. This is a randomized, double-masked non-inferiority trial. Children in grades 7 and 8 (age 12-15 years) in nine Chinese secondary schools, with presenting visual acuity (VA) ≤6/12 improved with refraction to ≥6/7.5 bilaterally, refractive error ≤-1.0 D and <2.0 D of anisometropia and astigmatism bilaterally, were randomized to receive ready-made spectacles (RM) or identical-appearing spectacles with power determined by: subjective cycloplegic retinoscopy by a university optometrist (U), a rural refractionist (R) or non-cycloplegic self-refraction (SR). Main study outcome was global score on the National Eye Institute Refractive Error Quality of Life-42 (NEI-RQL-42) after 2 months of wearing study glasses, comparing other groups with the U group, adjusting for baseline score. Only one child (0.18%) was excluded for anisometropia or astigmatism. A total of 426 eligible subjects (mean age 14.2 years, 84.5% without glasses at baseline) were allocated to U [103 (24.2%)], RM [113 (26.5%)], R [108 (25.4%)] and SR [102 (23.9%)] groups, respectively. Baseline and endline score data were available for 398 (93.4%) of subjects. In multiple regression models adjusting for baseline score, older age (p = 0.003) and baseline spectacle wear (p = 0.016), but not study group assignment, were significantly associated with lower final score. Quality of life wearing ready-mades or glasses based on self-refraction did not differ from that with cycloplegic refraction by an experienced optometrist in this non-inferiority trial. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  15. On the geodetic applications of simultaneous range-differencing to LAGEOS

    NASA Technical Reports Server (NTRS)

    Pavlis, E. C.

    1982-01-01

    The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. Least-squares approximation using monomials and Chebyshev polynomials is compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observations show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.

  16. Assessing Variability and Errors in Historical Runoff Forecasting with Physical Models and Alternative Data Sources

    NASA Astrophysics Data System (ADS)

    Penn, C. A.; Clow, D. W.; Sexstone, G. A.

    2017-12-01

    Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a spatially-distributed physics-based snow model was used to assess possible effects of land cover change on snowpack properties. Trends in forecasted error are variable while baseline model results show a consistent under-prediction in the recent decade, highlighting possible compounding effects of climate and land cover changes.

  17. Ballistic intercept missions to Comet Encke

    NASA Technical Reports Server (NTRS)

    Mumma, M. (Compiler)

    1975-01-01

    The optimum ballistic intercept of a spacecraft with the comet Encke is determined. The following factors are considered in the analysis: energy requirements, encounter conditions, targeting error, comet activity, spacecraft engineering requirements and restraints, communications, and scientific return of the mission. A baseline model is formulated which includes the basic elements necessary to estimate the scientific return for the different missions considered. Tradeoffs which have major impact on the cost and/or scientific return of a ballistic mission to comet Encke are identified and discussed. Recommendations are included.

  18. Error analysis for a spaceborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Pavlis, E. C.

    1979-01-01

    The dependence (or independence) of baseline accuracies, obtained from a typical mission of a spaceborne ranging system, on several factors is investigated. The emphasis is placed on a priori station information, but factors such as the elevation cut-off angle, the geometry of the network, the mean orbital height, and to a limited extent geopotential modeling are also examined. The results are obtained through simulations, but some theoretical justification is also given. Guidelines for freeing the results from these dependencies are suggested for most of the factors.

  19. NSR Modeling Emission Baselines

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  20. Control of Formation-Flying Multi-Element Space Interferometers with Direct Interferometer-Output Feedback

    NASA Technical Reports Server (NTRS)

    Lu, Hui-Ling; Cheng, H. L.; Lyon, Richard G.; Carpenter, Kenneth G.

    2007-01-01

    The long-baseline space interferometer concept involving formation flying of multiple spacecraft holds great promise as future space missions for high-resolution imagery. A major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to accurately control these spacecraft and their optics payloads in the specified configuration. Our research focuses on the determination of the optical errors to achieve fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present a suite of estimation tools that can effectively extract from the raw interferometric image relative x/y, piston translational and tip/tilt deviations at the exit pupil aperture. The use of these error estimates in achieving control of the interferometer elements is demonstrated using simulated as well as laboratory-collected interferometric stellar images.

  1. Control of Formation-Flying Multi-Element Space Interferometers with Direct Interferometer-Output Feedback

    NASA Technical Reports Server (NTRS)

    Lu, Hui-Ling; Cheng, Victor H. L.; Lyon, Richard G.; Carpenter, Kenneth G.

    2007-01-01

    The long-baseline space interferometer concept involving formation flying of multiple spacecraft holds great promise as future space missions for high-resolution imagery. A major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to accurately control these spacecraft and their optics payloads in the specified configuration. Our research focuses on the determination of the optical errors to achieve fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present a suite of estimation tools that can effectively extract from the raw interferometric image relative x/y, piston translational and tip/tilt deviations at the exit pupil aperture. The use of these error estimates in achieving control of the interferometer elements is demonstrated using simulated as well as laboratory-collected interferometric stellar images.

  2. Longitudinal Improvement in Balance Error Scoring System Scores among NCAA Division-I Football Athletes.

    PubMed

    Mathiasen, Ross; Hogrefe, Christopher; Harland, Kari; Peterson, Andrew; Smoot, M Kyle

    2018-02-15

    The Balance Error Scoring System (BESS) is a commonly used concussion assessment tool. Recent studies have questioned the stability and reliability of baseline BESS scores. The purpose of this longitudinal prospective cohort study is to examine differences in yearly baseline BESS scores in athletes participating on an NCAA Division-I football team. NCAA Division-I freshman football athletes were videotaped performing the BESS test at matriculation and after 1 year of participation in the football program. Twenty-three athletes were enrolled in year 1 of the study, and 25 athletes were enrolled in year 2. Those athletes enrolled in year 1 were again videotaped after year 2 of the study. The paired t-test was used to assess for change in score over time for the firm surface, foam surface, and the cumulative BESS score. Additionally, inter- and intrarater reliability values were calculated. Cumulative errors on the BESS significantly decreased from a mean of 20.3 at baseline to 16.8 after 1 year of participation. The mean number of errors following the second year of participation was 15.0. Inter-rater reliability for the cumulative score ranged from 0.65 to 0.75. Intrarater reliability was 0.81. After 1 year of participation, there is a statistically and clinically significant improvement in BESS scores in an NCAA Division-I football program. Although additional improvement in BESS scores was noted after a second year of participation, it did not reach statistical significance. Football athletes should undergo baseline BESS testing at least yearly if the BESS is to be optimally useful as a diagnostic test for concussion.

  3. Effect of tailored on-road driving lessons on driving safety in older adults: A randomised controlled trial.

    PubMed

    Anstey, Kaarin J; Eramudugolla, Ranmalee; Kiely, Kim M; Price, Jasmine

    2018-06-01

    We evaluated the effectiveness of individually tailored driving lessons compared with a road rules refresher course for improving older driver safety. Two-arm parallel randomised controlled trial, involving current drivers aged 65 and older (mean age 72.0, 47.4% male) residing in Canberra, Australia. The intervention group (n = 28) received a two-hour class-based road rules refresher course, and two one-hour driving lessons tailored to improve poor driving skills and habits identified in a baseline on-road assessment. The control group (n = 29) received the road rules refresher course only. Tests of cognitive performance and on-road driving were conducted at baseline and at 12 weeks. The main outcome measure was the driver safety rating (DSR) on the on-road driving test. The number of critical errors made during the on-road test was also recorded. 55 drivers completed the trial (intervention group: 27, control group: 28). Both groups showed reduction in dangerous/hazardous driver errors that required instructor intervention. From baseline to follow-up there was a greater reduction in the number of critical errors made by the intervention group relative to the control group (IRR = 0.53, SE = 0.1, p = .008). The intervention group improved on the DSR more than the control group (intervention mean change = 1.07, SD = 2.00; control group mean change = 0.32, SD = 1.61). The intervention group had 64% remediation of unsafe driving, in which drivers who scored 'fail' at baseline achieved a 'pass' at follow-up. The control group had 25% remediation. Tailored driving lessons reduced the critical driving errors made by older adults. Longer term follow-up and larger trials are required. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Effect of external digital elevation model on monitoring of mine subsidence by two-pass differential interferometric synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Tao, Qiuxiang; Gao, Tengfei; Liu, Guolin; Wang, Zhiwei

    2017-04-01

    The external digital elevation model (DEM) error is one of the main factors that affect the accuracy of mine subsidence monitored by two-pass differential interferometric synthetic aperture radar (DInSAR), which has been widely used in monitoring mining-induced subsidence. The theoretical relationship between external DEM error and monitored deformation error is derived based on the principles of interferometric synthetic aperture radar (InSAR) and two-pass DInSAR. Taking the Dongtan and Yangcun mine areas of Jining as test areas, the difference and accuracy of 1:50000, ASTER GDEM V2, and SRTM DEMs are compared and analyzed. Two interferometric pairs of Advanced Land Observing Satellite Phased Array L-band SAR data covering the test areas are processed using two-pass DInSAR with the three external DEMs to compare and analyze the effect of the three external DEMs on monitored mine subsidence in high- and low-coherence subsidence regions. Moreover, the reliability and accuracy of the three DInSAR-monitored results are compared and verified with leveling-measured subsidence values. Results show that the effect of the external DEM on mine subsidence monitored by two-pass DInSAR is related not only to the radar look angle, perpendicular baseline, slant range, and external DEM error, but also to the ground resolution of the DEM, the magnitude of subsidence, and the coherence of the test areas.
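
    The first-order relationship referred to above is commonly written as delta_d = B_perp * delta_h / (R * sin(theta)), i.e. the line-of-sight deformation error grows with the perpendicular baseline and the external DEM error. A small sketch with placeholder, ALOS PALSAR-like geometry (not the paper's actual pair parameters):

```python
import numpy as np

def dem_error_to_los_error(dem_error_m, perp_baseline_m, slant_range_m, look_angle_deg):
    """First-order LOS deformation error caused by an external DEM error in
    two-pass DInSAR: delta_d = B_perp * delta_h / (R * sin(theta))."""
    theta = np.radians(look_angle_deg)
    return perp_baseline_m * dem_error_m / (slant_range_m * np.sin(theta))

# Placeholder geometry, roughly L-band SAR-like; not the parameters used in the study.
b_perp = 700.0          # perpendicular baseline (m)
r_slant = 870e3         # slant range (m)
look = 34.3             # look angle (deg)

for dh in (5.0, 10.0, 20.0):                       # external DEM error (m)
    err_mm = 1e3 * dem_error_to_los_error(dh, b_perp, r_slant, look)
    print(f"DEM error {dh:5.1f} m -> LOS deformation error {err_mm:.2f} mm")
```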

  5. Neural network application to aircraft control system design

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.

  6. Neural network application to aircraft control system design

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.

  7. Validation and sensitivity of the FINE Bayesian network for forecasting aquatic exposure to nano-silver.

    PubMed

    Money, Eric S; Barton, Lauren E; Dawson, Joseph; Reckhow, Kenneth H; Wiesner, Mark R

    2014-03-01

    The adaptive nature of the Forecasting the Impacts of Nanomaterials in the Environment (FINE) Bayesian network is explored. We create an updated FINE model (FINEAgNP-2) for predicting aquatic exposure concentrations of silver nanoparticles (AgNP) by combining the expert-based parameters from the baseline model established in previous work with literature data related to particle behavior, exposure, and nano-ecotoxicology via parameter learning. We validate the AgNP forecast from the updated model using mesocosm-scale field data and determine the sensitivity of several key variables to changes in environmental conditions, particle characteristics, and particle fate. Results show that the prediction accuracy of the FINEAgNP-2 model increased approximately 70% over the baseline model, with an error rate of only 20%, suggesting that FINE is a reliable tool to predict aquatic concentrations of nano-silver. Sensitivity analysis suggests that fractal dimension, particle diameter, conductivity, time, and particle fate have the most influence on aquatic exposure given the current knowledge; however, numerous knowledge gaps can be identified to suggest further research efforts that will reduce the uncertainty in subsequent exposure and risk forecasts. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. The Spring 1985 high precision baseline test of the JPL GPS-based geodetic system

    NASA Technical Reports Server (NTRS)

    Davidson, John M.; Thornton, Catherine L.; Stephens, Scott A.; Blewitt, Geoffrey; Lichten, Stephen M.; Sovers, Ojars J.; Kroger, Peter M.; Skrumeda, Lisa L.; Border, James S.; Neilan, Ruth E.

    1987-01-01

    The Spring 1985 High Precision Baseline Test (HPBT) was conducted. The HPBT was designed to meet a number of objectives. Foremost among these was the demonstration of a level of accuracy of 1 to 2 parts in 10⁷, or better, for baselines ranging in length up to several hundred kilometers. These objectives were all met with a high degree of success, with respect to the demonstration of system accuracy in particular. The results from six baselines ranging in length from 70 to 729 km were examined for repeatability and, in the case of three baselines, were compared to results from colocated VLBI systems. Repeatability was found to be 5 parts in 10⁸ (RMS) for the north baseline coordinate, independent of baseline length, while for the east coordinate the RMS repeatability was found to be larger than this by factors of 2 to 4. The GPS-based results were found to be in agreement with those from colocated VLBI measurements, when corrected for the physical separations of the VLBI and GPS antennas, at the level of 1 to 2 parts in 10⁷ in all coordinates, independent of baseline length. The results for baseline repeatability are consistent with the current GPS error budget, but the GPS-VLBI intercomparisons disagree at a somewhat larger level than expected. It is hypothesized that these differences may result from errors in the local survey measurements used to correct for the separations of the GPS and VLBI antenna reference centers.

  9. The Influence of Training Phase on Error of Measurement in Jump Performance.

    PubMed

    Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B

    2016-03-01

    The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.

  10. Chaotropic salts in liquid chromatographic method development for the determination of pramipexole and its impurities following quality-by-design principles.

    PubMed

    Vemić, Ana; Rakić, Tijana; Malenović, Anđelija; Medenica, Mirjana

    2015-01-01

    The aim of this paper is to present the development of a liquid chromatographic method in which chaotropic salts are used as mobile phase additives, following QbD principles. The effect of critical process parameters (column chemistry, salt nature and concentration, acetonitrile content and column temperature) on the critical quality attributes (retention of the first and last eluting peak and separation of the critical peak pairs) was studied applying the design of experiments-design space methodology (DoE-DS). A D-optimal design was chosen in order to simultaneously examine both categorical and numerical factors in a minimal number of experiments. Two approaches to establishing quality assurance were applied and compared: the uncertainty originating from the models was assessed by Monte Carlo simulations, either propagating an error equal to the variance of the model residuals or propagating the error originating from the calculation of the model coefficients. Baseline separation of pramipexole and its five impurities was achieved, fulfilling all the required criteria, while method validation proved the method's reliability. Copyright © 2014 Elsevier B.V. All rights reserved.
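
    The two Monte Carlo routes mentioned (propagating the residual variance versus propagating the coefficient uncertainty) can be sketched for a single fitted response as follows. The model coefficients, covariance, operating point, and acceptance limit are placeholders, not the paper's chromatographic models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fitted response model for one critical quality attribute (e.g. separation of a
# critical peak pair) as a function of two coded factors; placeholder values only.
beta = np.array([2.1, 0.35, -0.20])            # intercept, factor 1, factor 2
cov_beta = np.diag([0.01, 0.004, 0.004])       # estimated coefficient covariance
sigma_resid = 0.12                             # residual standard deviation
x = np.array([1.0, 0.3, -0.5])                 # candidate operating point (coded units)
limit = 1.8                                    # acceptance criterion: response >= 1.8

n = 100_000
# Route 1: propagate an error equal to the residual variance around the point prediction.
pred_resid = x @ beta + rng.normal(scale=sigma_resid, size=n)
# Route 2: propagate the uncertainty of the coefficient estimates.
pred_coef = rng.multivariate_normal(beta, cov_beta, size=n) @ x

for name, pred in (("residual-variance MC", pred_resid), ("coefficient MC", pred_coef)):
    print(f"{name:22s} P(response >= {limit}) = {np.mean(pred >= limit):.3f}")
```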

  11. Improving Exercise Performance with an Accelerometer-Based Smartphone App: A Randomized Controlled Trial.

    PubMed

    Bittel, Daniel C; Bittel, Adam J; Williams, Christine; Elazzazi, Ashraf

    2017-05-01

    Proper exercise form is critical for the safety and efficacy of therapeutic exercise. This research examines if a novel smartphone application, designed to monitor and provide real-time corrections during resistance training, can reduce performance errors and elicit a motor learning response. Forty-two participants aged 18 to 65 years were randomly assigned to treatment and control groups. Both groups were tested for the number of movement errors made during a 10-repetition set completed at baseline, immediately after, and 1 to 2 weeks after a single training session of knee extensions. The treatment group trained with real-time, smartphone-generated feedback, whereas the control subjects did not. Group performance (number of errors) was compared across test sets using a 2-factor mixed-model analysis of variance. No differences were observed between groups for age, sex, or resistance training experience. There was a significant interaction between test set and group. The treatment group demonstrated fewer errors on posttests 1 and 2 compared with pretest (P < 0.05). There was no reduction in the number of errors on any posttest for control subjects. Smartphone apps, such as the one used in this study, may enhance patient supervision, safety, and exercise efficacy across rehabilitation settings. A single training session with the app promoted motor learning and improved exercise performance.

  12. ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.

    PubMed

    van Breukelen, Gerard J P

    2013-11-01

    The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These 2 methods can give contradictory results if groups differ at pretest, a phenomenon that is known as Lord's paradox. Literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: First, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without group effect at pretest. Second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
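
    A minimal sketch of the two competing analyses on simulated pretest-posttest data (variable names and effect sizes are illustrative, not from the article); with a group difference at pretest, the two estimated treatment effects can diverge, which is Lord's paradox.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical pretest/posttest data for two non-randomized groups.
    rng = np.random.default_rng(0)
    n = 100
    group = rng.integers(0, 2, n)
    pre = 50 + 5 * group + rng.normal(0, 10, n)      # groups differ at pretest
    post = pre + 2 * group + rng.normal(0, 5, n)
    df = pd.DataFrame({"group": group, "pre": pre, "post": post})

    # ANCOVA: posttest as outcome, pretest as covariate.
    ancova = smf.ols("post ~ pre + group", data=df).fit()

    # CHANGE: posttest minus pretest as outcome.
    df["change"] = df["post"] - df["pre"]
    change = smf.ols("change ~ group", data=df).fit()

    # With a pretest group difference the two group effects can disagree.
    print(ancova.params["group"], change.params["group"])
    ```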

  13. An evaluation of space time cube representation of spatiotemporal patterns.

    PubMed

    Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine

    2009-01-01

    Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.

  14. Three-dimensional computer-assisted study model analysis of long-term oral-appliance wear. Part 1: Methodology.

    PubMed

    Chen, Hui; Lowe, Alan A; de Almeida, Fernanda Riberiro; Wong, Mary; Fleetham, John A; Wang, Bangkang

    2008-09-01

    The aim of this study was to test a 3-dimensional (3D) computer-assisted dental model analysis system that uses selected landmarks to describe tooth movement during treatment with an oral appliance. Dental casts of 70 patients diagnosed with obstructive sleep apnea and treated with oral appliances for a mean time of 7 years 4 months were evaluated with a 3D digitizer (MicroScribe-3DX, Immersion, San Jose, Calif) compatible with the Rhinoceros modeling program (version 3.0 SR3c, Robert McNeel & Associates, Seattle, Wash). A total of 86 landmarks on each model were digitized, and 156 variables were calculated as either the linear distance between points or the distance from points to reference planes. Four study models for each patient (maxillary baseline, mandibular baseline, maxillary follow-up, and mandibular follow-up) were superimposed on 2 sets of reference points: 3 points on the palatal rugae for maxillary model superimposition, and 3 occlusal contact points for the same set of maxillary and mandibular model superimpositions. The patients were divided into 3 evaluation groups by 5 orthodontists based on the changes between baseline and follow-up study models. Digital dental measurements could be analyzed, including arch width, arch length, curve of Spee, overbite, overjet, and the anteroposterior relationship between the maxillary and mandibular arches. A method error within 0.23 mm in 14 selected variables was found for the 3D system. The statistical differences in the 3 evaluation groups verified the division criteria determined by the orthodontists. The system provides a method to record 3D measurements of study models that permits computer visualization of tooth position and movement from various perspectives.

  15. Deformation Estimation In Non-Urban Areas Exploiting High Resolution SAR Data

    NASA Astrophysics Data System (ADS)

    Goel, Kanika; Adam, Nico

    2012-01-01

    Advanced techniques such as the Small Baseline Subset Algorithm (SBAS) have been developed for terrain motion mapping in non-urban areas, with a focus on extracting information from distributed scatterers (DSs). SBAS uses small-baseline differential interferograms (to limit the effects of geometric decorrelation), and these are typically multilooked to reduce phase noise, resulting in a loss of resolution. Various error sources, e.g. phase unwrapping errors, topographic errors, temporal decorrelation and atmospheric effects, also affect the interferometric phase. The aim of our work is improved deformation monitoring in non-urban areas exploiting high resolution SAR data. The paper provides technical details and a processing example of a newly developed technique that incorporates an adaptive spatial phase filtering algorithm for accurate high resolution differential interferometric stacking, followed by deformation retrieval via the SBAS approach in which the phase inversion is performed using a more robust L1-norm minimization.

  16. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
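
    The paper demonstrates these models in SAS; as a rough cross-check of the idea, the sketch below fits a random-intercept mixed model in Python with statsmodels on simulated two-eyes-per-patient data (all names and values are illustrative).

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: two eyes per patient, correlated via a
    # patient-level random effect (names and effect sizes are illustrative).
    rng = np.random.default_rng(2)
    n_patients = 200
    patient = np.repeat(np.arange(n_patients), 2)
    cnv = np.tile([1, 0], n_patients)                # study eye vs fellow eye
    u = rng.normal(0, 1.5, n_patients)[patient]      # shared patient effect
    refraction = -1.0 + 0.15 * cnv + u + rng.normal(0, 0.5, 2 * n_patients)
    df = pd.DataFrame({"patient": patient, "cnv": cnv, "refraction": refraction})

    # Random intercept per patient accounts for inter-eye correlation.
    fit = smf.mixedlm("refraction ~ cnv", data=df, groups=df["patient"]).fit()
    print(fit.params["cnv"], fit.bse["cnv"])
    ```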

  17. Comparison of MLC error sensitivity of various commercial devices for VMAT pre-treatment quality assurance.

    PubMed

    Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi

    2018-05-01

    The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) criteria were applied between the baseline and error-induced measurements. Some variation in MLC error sensitivity across the evaluation metrics and MLC error ranges was observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as in lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, whereas the other devices could not detect such errors in some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
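
    As a simple illustration of the dose-difference (DD) comparison referred to above, the sketch below computes a global DD pass rate between a baseline and an error-induced dose plane with NumPy; the criterion, low-dose threshold, and dose arrays are illustrative and unrelated to the devices tested in the study.

    ```python
    import numpy as np

    def dose_difference_pass_rate(baseline, measured, criterion=0.02, low_dose_cut=0.10):
        """Fraction of points whose dose difference (relative to the global max
        of the baseline) is within the criterion, ignoring low-dose regions."""
        baseline = np.asarray(baseline, dtype=float)
        measured = np.asarray(measured, dtype=float)
        ref_max = baseline.max()
        mask = baseline > low_dose_cut * ref_max     # e.g. ignore <10% of max dose
        rel_diff = np.abs(measured - baseline) / ref_max
        return np.mean(rel_diff[mask] <= criterion)

    # Illustrative 2D dose planes (not measured data).
    rng = np.random.default_rng(3)
    baseline = rng.uniform(0.0, 2.0, (100, 100))
    error_induced = baseline * 1.01 + rng.normal(0.0, 0.01, baseline.shape)
    print(dose_difference_pass_rate(baseline, error_induced, criterion=0.02))
    ```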

  18. Recoil polarization measurements for neutral pion electroproduction at Q2=1(GeV/c)2 near the Δ resonance

    NASA Astrophysics Data System (ADS)

    Kelly, J. J.; Gayou, O.; Roché, R. E.; Chai, Z.; Jones, M. K.; Sarty, A. J.; Frullani, S.; Aniol, K.; Beise, E. J.; Benmokhtar, F.; Bertozzi, W.; Boeglin, W. U.; Botto, T.; Brash, E. J.; Breuer, H.; Brown, E.; Burtin, E.; Calarco, J. R.; Cavata, C.; Chang, C. C.; Chant, N. S.; Chen, J.-P.; Coman, M.; Crovelli, D.; Leo, R. De; Dieterich, S.; Escoffier, S.; Fissum, K. G.; Garde, V.; Garibaldi, F.; Georgakopoulos, S.; Gilad, S.; Gilman, R.; Glashausser, C.; Hansen, J.-O.; Higinbotham, D. W.; Hotta, A.; Huber, G. M.; Ibrahim, H.; Iodice, M.; Jager, C. W. De; Jiang, X.; Klimenko, A.; Kozlov, A.; Kumbartzki, G.; Kuss, M.; Lagamba, L.; Laveissière, G.; Lerose, J. J.; Lindgren, R. A.; Liyange, N.; Lolos, G. J.; Lourie, R. W.; Margaziotis, D. J.; Marie, F.; Markowitz, P.; McAleer, S.; Meekins, D.; Michaels, R.; Milbrath, B. D.; Mitchell, J.; Nappa, J.; Neyret, D.; Perdrisat, C. F.; Potokar, M.; Punjabi, V. A.; Pussieux, T.; Ransome, R. D.; Roos, P. G.; Rvachev, M.; Saha, A.; Širca, S.; Suleiman, R.; Strauch, S.; Templon, J. A.; Todor, L.; Ulmer, P. E.; Urciuoli, G. M.; Weinstein, L. B.; Wijsooriya, K.; Wojtsekhowski, B.; Zheng, X.; Zhu, L.

    2007-02-01

    We measured angular distributions of differential cross section, beam analyzing power, and recoil polarization for neutral pion electroproduction at Q2=1.0(GeV/c)2 in 10 bins of 1.17⩽W⩽1.35 GeV across the Δ resonance. A total of 16 independent response functions were extracted, of which 12 were observed for the first time. Comparisons with recent model calculations show that response functions governed by real parts of interference products are determined relatively well near the physical mass, W=MΔ≈1.232 GeV, but the variation among models is large for response functions governed by imaginary parts, and for both types of response functions, the variation increases rapidly for W>MΔ. We performed a multipole analysis that adjusts suitable subsets of ℓπ⩽2 amplitudes with higher partial waves constrained by baseline models. This analysis provides both real and imaginary parts. The fitted multipole amplitudes are nearly model independent—there is very little sensitivity to the choice of baseline model or truncation scheme. By contrast, truncation errors in the traditional Legendre analysis of N→Δ quadrupole ratios are not negligible. Parabolic fits to the W dependence around MΔ for the multipole analysis give values of Re(S1+/M1+)=(-6.61±0.18)% and Re(E1+/M1+)=(-2.87±0.19)% for the pπ0 channel at W=1.232 GeV and Q2=1.0(GeV/c)2 that are distinctly larger than those from the Legendre analysis of the same data. Similarly, the multipole analysis gives Re(S0+/M1+)=(+7.1±0.8)% at W=1.232 GeV, consistent with recent models, while the traditional Legendre analysis gives the opposite sign because its truncation errors are quite severe.

  19. Quantitative imaging features of pretreatment CT predict volumetric response to chemotherapy in patients with colorectal liver metastases.

    PubMed

    Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L

    2018-06-19

    This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy, but predicting which patients will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • A prediction model was constructed that predicts volumetric response with approximately 20% error, suggesting that quantitative imaging holds promise to better select patients for specific treatments.
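
    A minimal sketch of the kind of train/validation evaluation described above, here with scikit-learn and synthetic features; since the response is already a percentage change, the "MAPE" is computed as a mean absolute error in percentage points, which is an assumption about the paper's definition.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # Illustrative pretreatment imaging features and volumetric response
    # (percentage change in tumour volume); not the study's data.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(157, 5))
    y = X @ np.array([10.0, -5.0, 3.0, 0.0, 1.0]) + rng.normal(0, 8, 157)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)

    pred = model.predict(X_te)
    mape = np.mean(np.abs(pred - y_te))   # mean absolute prediction error, in % points
    print(mape, r2_score(y_tr, model.predict(X_tr)))
    ```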

  20. BEAM-FORMING ERRORS IN MURCHISON WIDEFIELD ARRAY PHASED ARRAY ANTENNAS AND THEIR EFFECTS ON EPOCH OF REIONIZATION SCIENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neben, Abraham R.; Hewitt, Jacqueline N.; Dillon, Joshua S.

    2016-03-20

    Accurate antenna beam models are critical for radio observations aiming to isolate the redshifted 21 cm spectral line emission from the Dark Ages and the Epoch of Reionization (EOR) and unlock the scientific potential of 21 cm cosmology. Past work has focused on characterizing mean antenna beam models using either satellite signals or astronomical sources as calibrators, but antenna-to-antenna variation due to imperfect instrumentation has remained unexplored. We characterize this variation for the Murchison Widefield Array (MWA) through laboratory measurements and simulations, finding typical deviations of the order of ±10%–20% near the edges of the main lobe and in the sidelobes. We consider the ramifications of these results for image- and power spectrum-based science. In particular, we simulate visibilities measured by a 100 m baseline and find that using an otherwise perfect foreground model, unmodeled beam-forming errors severely limit foreground subtraction accuracy within the region of Fourier space contaminated by foreground emission (the “wedge”). This region likely contains much of the cosmological signal, and accessing it will require measurement of per-antenna beam patterns. However, unmodeled beam-forming errors do not contaminate the Fourier space region expected to be free of foreground contamination (the “EOR window”), showing that foreground avoidance remains a viable strategy.

  1. Supersonic Retropropulsion Experimental Results from the NASA Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Rhode, Matthew N.; Edquist, Karl T.; Player, Charles J.

    2011-01-01

    A new supersonic retropropulsion experimental effort, intended to provide code validation data, was recently completed in the Langley Research Center Unitary Plan Wind Tunnel Test Section 2 over the Mach number range from 2.4 to 4.6. The experimental model was designed using insights gained from pre-test computations, which were instrumental for sizing and refining the model to minimize tunnel wall interference and internal flow separation concerns. A 5-in diameter 70-deg sphere-cone forebody with a roughly 10-in long cylindrical aftbody was the baseline configuration selected for this study. The forebody was designed to accommodate up to four 4:1 area ratio supersonic nozzles. Primary measurements for this model were a large number of surface pressures on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Preliminary results and observations from the test are presented, while detailed data and uncertainty analyses are ongoing.

  2. Limitations of bootstrap current models

    DOE PAGES

    Belli, Emily A.; Candy, Jefferey M.; Meneghini, Orso; ...

    2014-03-27

    We assess the accuracy and limitations of two analytic models of the tokamak bootstrap current: (1) the well-known Sauter model and (2) a recent modification of the Sauter model by Koh et al. For this study, we use simulations from the first-principles kinetic code NEO as the baseline to which the models are compared. Tests are performed using both theoretical parameter scans as well as core-to-edge scans of real DIII-D and NSTX plasma profiles. The effects of extreme aspect ratio, large impurity fraction, energetic particles, and high collisionality are studied. In particular, the error in neglecting cross-species collisional coupling – an approximation inherent to both analytic models – is quantified. Moreover, the implications of the corrections from kinetic NEO simulations on MHD equilibrium reconstructions are studied via integrated modeling with kinetic EFIT.

  3. ECG fiducial point extraction using switching Kalman filter.

    PubMed

    Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian

    2018-04-01

    In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, following McSharry's model, the ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and the ECG baseline is modeled with a first-order autoregressive model. A discrete state variable called the "switch" is introduced that affects only the observation equations. Each specific observation equation is referred to as a mode; the switch changes between 7 modes, which correspond to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated that assigns each part of the ECG signal to the mode with the maximum probability. ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods, and the proposed method achieves lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
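
    A minimal sketch of the two signal components named above—Gaussian-shaped P/QRS/T waves in the spirit of McSharry's model plus a first-order autoregressive baseline; amplitudes, timings, and AR parameters are illustrative, and the switching Kalman filter itself is not reproduced here.

    ```python
    import numpy as np

    fs = 250                                  # sampling rate (Hz), illustrative
    t = np.arange(0, 1.0, 1.0 / fs)           # one beat of ~1 s

    def gaussian_wave(t, amp, center, width):
        return amp * np.exp(-0.5 * ((t - center) / width) ** 2)

    # P wave, QRS complex (three narrow Gaussians), T wave -- illustrative values.
    beat = (gaussian_wave(t, 0.15, 0.20, 0.025)      # P
            + gaussian_wave(t, -0.10, 0.37, 0.010)   # Q
            + gaussian_wave(t, 1.00, 0.40, 0.010)    # R
            + gaussian_wave(t, -0.20, 0.43, 0.010)   # S
            + gaussian_wave(t, 0.30, 0.65, 0.040))   # T

    # First-order autoregressive baseline wander.
    rng = np.random.default_rng(5)
    baseline = np.zeros_like(t)
    for k in range(1, len(t)):
        baseline[k] = 0.995 * baseline[k - 1] + rng.normal(0, 0.002)

    ecg = beat + baseline
    ```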

  4. Fitting Photometry of Blended Microlensing Events

    NASA Astrophysics Data System (ADS)

    Thomas, Christian L.; Griest, Kim

    2006-03-01

    We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.
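
    For readers unfamiliar with how the blend fraction enters the model, a minimal sketch of a blended point-source point-lens (Paczyński) light curve follows; parameter values are illustrative and this is not the authors' fitting code.

    ```python
    import numpy as np

    def magnification(t, t0, tE, u0):
        """Point-source point-lens magnification A(u(t))."""
        u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
        return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

    def blended_flux(t, t0, tE, u0, f_total, blend_fraction):
        """Observed flux when only a fraction of the baseline flux is lensed."""
        f_source = blend_fraction * f_total          # lensed (source) flux
        f_blend = (1.0 - blend_fraction) * f_total   # unlensed blend flux
        return f_source * magnification(t, t0, tE, u0) + f_blend

    t = np.linspace(-100.0, 100.0, 500)              # days
    flux = blended_flux(t, t0=0.0, tE=20.0, u0=0.1, f_total=1.0, blend_fraction=0.6)
    ```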

  5. Attitude control system conceptual design for the GOES-N spacecraft series

    NASA Technical Reports Server (NTRS)

    Markley, F. L.; Bauer, F. H.; Deily, J. J.; Femiano, M. D.

    1991-01-01

    The attitude determination sensing and processing of the system are considered, and inertial reference units, star trackers, and beacons and landmarks are discussed as well as an extended Kalman filter and expected attitude-determination performance. The baseline controller is overviewed, and a spacecraft motion compensation (SMC) algorithm, disturbance environment, and SMC performance expectations are covered. Detailed simulation results are presented, and emphasis is placed on dynamic models, attitude estimation and control, and SMC disturbance accommodation. It is shown that the attitude control system employing gyro/star tracker sensing and active three-axis control with reaction wheels is capable of maintaining attitude errors of 1.7 microrad or less on all axes in the absence of attitude disturbances, and that the sensor line-of-sight pointing errors can be reduced to 0.1 microrad by SMC.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N

    Purpose: In this study, we developed a system to calculate a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head-and-neck and prostate volumetric modulated arc therapy (VMAT), in real time and without additional treatment planning system (TPS) calculation. Methods: An original system for Clarkson-based dosimetric error calculation, which computes the dosimetric error caused by leaf miscalibration, was developed in MATLAB (MathWorks, Natick, MA). Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarged the aperture size by 1.0 mm. Second, an error-induced 3D dose was generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head-and-neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The dose percentage changes in the dose-volume-histogram parameters were 0.1±0.5% and 0.4±0.3% for mean dose on the target volume, and −0.2±0.5% and 0.2±0.3% for generalized equivalent uniform dose on the target volume. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pre-treatment patient QA dosimetry checks.

  7. Evaluating the design of an earth radiation budget instrument with system simulations. Part 2: Minimization of instantaneous sampling errors for CERES-I

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert

    1994-01-01

    Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number lying on grid area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are compared. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation method: to estimate N, instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.

  8. Effects of the TRPV1 antagonist ABT-102 on body temperature in healthy volunteers: pharmacokinetic/pharmacodynamic analysis of three phase 1 trials

    PubMed Central

    Othman, Ahmed A; Nothaft, Wolfram; Awni, Walid M; Dutta, Sandeep

    2013-01-01

    Aim To characterize quantitatively the relationship between exposure to ABT-102, a potent and selective TRPV1 antagonist, and its effects on body temperature in humans using a population pharmacokinetic/pharmacodynamic modelling approach. Methods Serial pharmacokinetic and body temperature (oral or core) measurements from three double-blind, randomized, placebo-controlled studies [single dose (2, 6, 18, 30 and 40 mg, solution formulation), multiple dose (2, 4 and 8 mg twice daily for 7 days, solution formulation) and multiple dose (1, 2 and 4 mg twice daily for 7 days, solid dispersion formulation)] were analyzed. NONMEM was used for model development and the model-building steps were guided by pre-specified diagnostic and statistical criteria. The final model was qualified using non-parametric bootstrap and visual predictive check. Results The developed body temperature model included additive components of baseline, circadian rhythm (cosine function of time) and ABT-102 effect (Emax function of plasma concentration) with tolerance development (decrease in ABT-102 Emax over time). Type of body temperature measurement (oral vs. core) was included as a fixed effect on baseline, amplitude of circadian rhythm and residual error. The model estimates (95% bootstrap confidence interval) were: baseline oral body temperature, 36.3 (36.3, 36.4)°C; baseline core body temperature, 37.0 (37.0, 37.1)°C; oral circadian amplitude, 0.25 (0.22, 0.28)°C; core circadian amplitude, 0.31 (0.28, 0.34)°C; circadian phase shift, 7.6 (7.3, 7.9) h; ABT-102 Emax, 2.2 (1.9, 2.7)°C; ABT-102 EC50, 20 (15, 28) ng ml−1; tolerance T50, 28 (20, 43) h. Conclusions At exposures predicted to exert analgesic activity in humans, the effect of ABT-102 on body temperature is estimated to be 0.6 to 0.8°C. This effect attenuates within 2 to 3 days of dosing. PMID:22966986
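
    A rough sketch of the structural model described above—baseline plus a 24 h cosine circadian term plus an Emax drug effect whose maximum attenuates over time. The specific tolerance function used here (a T50-type decline in Emax) and the concentration-time profile are assumptions made for illustration, with parameter values taken loosely from the reported estimates.

    ```python
    import numpy as np

    def body_temp(t_h, conc, baseline=36.3, amp=0.25, phase_h=7.6,
                  emax0=2.2, ec50=20.0, t50=28.0):
        """Body temperature (degC) vs time (h) and plasma concentration (ng/mL).
        Circadian term: cosine with a 24 h period. Drug effect: Emax model whose
        maximum declines with time on drug (assumed T50-type tolerance)."""
        circadian = amp * np.cos(2.0 * np.pi * (t_h - phase_h) / 24.0)
        emax_t = emax0 * t50 / (t50 + t_h)           # assumed tolerance function
        drug = emax_t * conc / (ec50 + conc)
        return baseline + circadian + drug

    t = np.linspace(0.0, 72.0, 200)
    conc = 40.0 * np.exp(-t / 12.0)                  # illustrative concentration-time curve
    temp = body_temp(t, conc)
    ```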

  9. Baseline pressure errors (BPEs) extensively influence intracranial pressure scores: results of a prospective observational study

    PubMed Central

    2014-01-01

    Background Monitoring of intracranial pressure (ICP) is a cornerstone in the surveillance of neurosurgical patients. The ICP is measured against a baseline pressure (i.e. zero or reference pressure). We have previously reported that baseline pressure errors (BPEs), manifested as spontaneous shifts or drifts in baseline pressure, cause erroneous readings of mean ICP in individual patients. The objective of this study was to monitor the frequency and severity of BPEs. To this end, we performed a prospective, observational study monitoring the ICP from two separate ICP sensors (Sensors 1 and 2) placed in close proximity in the brain. We characterized BPEs as differences in mean ICP despite a near-identical ICP waveform in Sensors 1 and 2. Methods The study enrolled patients with aneurysmal subarachnoid hemorrhage in need of continuous ICP monitoring as part of their intensive care management. The two sensors were placed close to each other in the brain parenchyma via the same burr hole. The monitoring was performed as long as needed from a clinical perspective and the ICP recordings were stored digitally for analysis. For every patient the mean ICP as well as the various ICP wave parameters of the two sensors were compared. Results Sixteen patients were monitored for a median of 164 hours (range 70–364 hours). Major BPEs, as defined by marked differences in mean ICP despite similar ICP waveform, were seen in 9 of them (56%). The BPEs were of magnitudes that had the potential to alter patient management. Conclusions Baseline pressure errors (BPEs) occur in a significant number of patients undergoing continuous ICP monitoring and they may alter patient management. The current practice of measuring ICP against a baseline pressure does not comply with the concept of State of the Art. Monitoring of the ICP waves ought to become the new State of the Art as they are not influenced by BPEs. PMID:24472296

  10. Improving Thermal Dose Accuracy in Magnetic Resonance-Guided Focused Ultrasound Surgery: Long-Term Thermometry Using a Prior Baseline as a Reference

    PubMed Central

    Bitton, Rachel R.; Webb, Taylor D.; Pauly, Kim Butts; Ghanouni, Pejman

    2015-01-01

    Purpose To investigate thermal dose volume (TDV) and non-perfused volume (NPV) of magnetic resonance-guided focused ultrasound (MRgFUS) treatments in patients with soft tissue tumors, and describe a method for MR thermal dosimetry using a baseline reference. Materials and Methods Agreement between TDV and immediate post treatment NPV was evaluated from MRgFUS treatments of five patients with biopsy-proven desmoid tumors. Thermometry data (gradient echo, 3T) were analyzed over the entire course of the treatments to discern temperature errors in the standard approach. The technique searches previously acquired baseline images for a match using 2D normalized cross-correlation and a weighted mean of phase difference images. Thermal dose maps and TDVs were recalculated using the matched baseline and compared to NPV. Results TDV and NPV showed between 47%–91% disagreement, using the standard immediate baseline method for calculating TDV. Long-term thermometry showed a nonlinear local temperature accrual, where peak additional temperature varied between 4–13°C (mean = 7.8°C) across patients. The prior baseline method could be implemented by finding a previously acquired matching baseline 61% ± 8% (mean ± SD) of the time. We found 7%–42% of the disagreement between TDV and NPV was due to errors in thermometry caused by heat accrual. For all patients, the prior baseline method increased the estimated treatment volume and reduced the discrepancies between TDV and NPV (P = 0.023). Conclusion This study presents a mismatch between in-treatment and post treatment efficacy measures. The prior baseline approach accounts for local heating and improves the accuracy of thermal dose-predicted volume. PMID:26119129

  11. Improving thermal dose accuracy in magnetic resonance-guided focused ultrasound surgery: Long-term thermometry using a prior baseline as a reference.

    PubMed

    Bitton, Rachel R; Webb, Taylor D; Pauly, Kim Butts; Ghanouni, Pejman

    2016-01-01

    To investigate thermal dose volume (TDV) and non-perfused volume (NPV) of magnetic resonance-guided focused ultrasound (MRgFUS) treatments in patients with soft tissue tumors, and describe a method for MR thermal dosimetry using a baseline reference. Agreement between TDV and immediate post treatment NPV was evaluated from MRgFUS treatments of five patients with biopsy-proven desmoid tumors. Thermometry data (gradient echo, 3T) were analyzed over the entire course of the treatments to discern temperature errors in the standard approach. The technique searches previously acquired baseline images for a match using 2D normalized cross-correlation and a weighted mean of phase difference images. Thermal dose maps and TDVs were recalculated using the matched baseline and compared to NPV. TDV and NPV showed between 47%-91% disagreement, using the standard immediate baseline method for calculating TDV. Long-term thermometry showed a nonlinear local temperature accrual, where peak additional temperature varied between 4-13°C (mean = 7.8°C) across patients. The prior baseline method could be implemented by finding a previously acquired matching baseline 61% ± 8% (mean ± SD) of the time. We found 7%-42% of the disagreement between TDV and NPV was due to errors in thermometry caused by heat accrual. For all patients, the prior baseline method increased the estimated treatment volume and reduced the discrepancies between TDV and NPV (P = 0.023). This study presents a mismatch between in-treatment and post treatment efficacy measures. The prior baseline approach accounts for local heating and improves the accuracy of thermal dose-predicted volume. © 2015 Wiley Periodicals, Inc.
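
    A minimal sketch of the baseline-matching step, scoring previously acquired baseline images against the current image with a normalized cross-correlation coefficient and taking the best match above a threshold; the arrays and threshold are illustrative, and the weighted phase-difference combination described in the paper is not reproduced.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation coefficient between two images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return np.mean(a * b)

    def best_matching_baseline(current, baselines, threshold=0.9):
        """Return index of the best-matching prior baseline, or None if no
        baseline correlates strongly enough with the current image."""
        scores = np.array([ncc(current, b) for b in baselines])
        idx = int(np.argmax(scores))
        return idx if scores[idx] >= threshold else None

    rng = np.random.default_rng(6)
    baselines = [rng.normal(size=(64, 64)) for _ in range(10)]
    current = baselines[3] + rng.normal(0, 0.1, (64, 64))
    print(best_matching_baseline(current, baselines))
    ```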

  12. Five-Year Progression of Refractive Errors and Incidence of Myopia in School-Aged Children in Western China

    PubMed Central

    Zhou, Wen-Jun; Zhang, Yong-Ye; Li, Hua; Wu, Yu-Fei; Xu, Ji; Lv, Sha; Li, Ge; Liu, Shi-Chun; Song, Sheng-Fang

    2016-01-01

    Background To determine the change in refractive error and the incidence of myopia among school-aged children in the Yongchuan District of Chongqing City, Western China. Methods A population-based cross-sectional survey was initially conducted in 2006 among 3070 children aged 6 to 15 years. A longitudinal follow-up study was then conducted 5 years later between November 2011 and March 2012. Refractive error was measured under cycloplegia with autorefraction. Age, sex, and baseline refractive error were evaluated as risk factors for progression of refractive error and incidence of myopia. Results Longitudinal data were available for 1858 children (60.5%). The cumulative mean change in refractive error was −2.21 (standard deviation [SD], 1.87) diopters (D) for the entire study population, with an annual progression of refraction in a myopic direction of −0.43 D. Myopic progression of refractive error was associated with younger age, female sex, and higher myopic or hyperopic refractive error at baseline. The cumulative incidence of myopia, defined as a spherical equivalent refractive error of −0.50 D or more, among initial emmetropes and hyperopes was 54.9% (95% confidence interval [CI], 45.2%–63.5%), with an annual incidence of 10.6% (95% CI, 8.7%–13.1%). Myopia was more likely to occur in girls and in older children. Conclusions In Western China, both myopic progression and incidence of myopia were higher than those of children from most other locations in China and from the European Caucasian population. Compared with a previous study in China, there was a relative increase in annual myopia progression and annual myopia incidence, a finding which is consistent with the increasing trend in the prevalence of myopia in China. PMID:26875599

  13. Five-Year Progression of Refractive Errors and Incidence of Myopia in School-Aged Children in Western China.

    PubMed

    Zhou, Wen-Jun; Zhang, Yong-Ye; Li, Hua; Wu, Yu-Fei; Xu, Ji; Lv, Sha; Li, Ge; Liu, Shi-Chun; Song, Sheng-Fang

    2016-07-05

    To determine the change in refractive error and the incidence of myopia among school-aged children in the Yongchuan District of Chongqing City, Western China. A population-based cross-sectional survey was initially conducted in 2006 among 3070 children aged 6 to 15 years. A longitudinal follow-up study was then conducted 5 years later between November 2011 and March 2012. Refractive error was measured under cycloplegia with autorefraction. Age, sex, and baseline refractive error were evaluated as risk factors for progression of refractive error and incidence of myopia. Longitudinal data were available for 1858 children (60.5%). The cumulative mean change in refractive error was -2.21 (standard deviation [SD], 1.87) diopters (D) for the entire study population, with an annual progression of refraction in a myopic direction of -0.43 D. Myopic progression of refractive error was associated with younger age, female sex, and higher myopic or hyperopic refractive error at baseline. The cumulative incidence of myopia, defined as a spherical equivalent refractive error of -0.50 D or more, among initial emmetropes and hyperopes was 54.9% (95% confidence interval [CI], 45.2%-63.5%), with an annual incidence of 10.6% (95% CI, 8.7%-13.1%). Myopia was more likely to occur in girls and in older children. In Western China, both myopic progression and incidence of myopia were higher than those of children from most other locations in China and from the European Caucasian population. Compared with a previous study in China, there was a relative increase in annual myopia progression and annual myopia incidence, a finding which is consistent with the increasing trend in the prevalence of myopia in China.

  14. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
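
    The two-level external cross-validation described above corresponds to what is often called nested cross-validation: hyperparameters are tuned only inside the inner folds, and the outer folds estimate the error rate. The sketch below uses scikit-learn on synthetic data (the paper itself provides R code based on the PAMR package).

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=120, n_features=500, n_informative=10,
                               random_state=0)

    # Inner loop tunes the hyperparameter; outer loop estimates the error rate,
    # so tuning never sees the outer test folds (avoids optimization bias).
    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
    tuned = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=inner)
    outer_acc = cross_val_score(tuned, X, y, cv=outer)

    print("estimated error rate:", 1.0 - outer_acc.mean())
    ```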

  15. Cognitive performance is associated with gray matter decline in first-episode psychosis.

    PubMed

    Dempster, Kara; Norman, Ross; Théberge, Jean; Densmore, Maria; Schaefer, Betsy; Williamson, Peter

    2017-06-30

    Progressive loss of gray matter has been demonstrated over the early course of schizophrenia. Identification of an association between cognition and gray matter may lead to development of early interventions directed at preserving gray matter volume and cognitive ability. The present study evaluated the association between gray matter using voxel-based morphometry (VBM) and cognitive testing in a sample of 16 patients with first-episode psychosis. A simple regression was applied to investigate the association between gray matter at baseline and 80 months and cognitive tests at baseline. Performance on the Wisconsin Card Sorting Task (WCST) at baseline was positively associated with gray matter volume in several brain regions. There was an association between decreased gray matter at baseline in the nucleus accumbens and Trails B errors. Performing worse on Trails B and making more WCST perseverative errors at baseline was associated with gray matter decline over 80 months in the right globus pallidus, left inferior parietal lobe, Brodmann's area (BA) 40, and left superior parietal lobule and BA 7 respectively. All significant findings were cluster corrected. The results support a relationship between aspects of cognitive impairment and gray matter abnormalities in first-episode psychosis. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  16. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter‑level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.

  17. Baseline Establishment Using Virtual Environment Traumatic Brain Injury Screen (VETS)

    DTIC Science & Technology

    2015-06-01

    indicator of mTBI. Further, these results establish a baseline data set, which may be useful in comparing concussed individuals. Subject terms: concussion, mild traumatic brain injury (mTBI), traumatic brain injury (TBI), balance, Sensory Organization Test, Balance Error Scoring System, center of...

  18. Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors

    PubMed Central

    Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B

    2015-01-01

    Background: The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. Methods: The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. Results: We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Conclusion: Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered. PMID:26033877

  19. Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B

    2015-08-01

    The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered. © The Author(s) 2015.

  20. The effect of memory and context changes on color matches to real objects.

    PubMed

    Allred, Sarah R; Olkkonen, Maria

    2015-07-01

    Real-world color identification tasks often require matching the color of objects between contexts and after a temporal delay, thus placing demands on both perceptual and memory processes. Although the mechanisms of matching colors between different contexts have been widely studied under the rubric of color constancy, little research has investigated the role of long-term memory in such tasks or how memory interacts with color constancy. To investigate this relationship, observers made color matches to real study objects that spanned color space, and we independently manipulated the illumination impinging on the objects, the surfaces in which objects were embedded, and the delay between seeing the study object and selecting its color match. Adding a 10-min delay increased both the bias and variability of color matches compared to a baseline condition. These memory errors were well accounted for by modeling memory as a noisy but unbiased version of perception constrained by the matching methods. Surprisingly, we did not observe significant increases in errors when illumination and surround changes were added to the 10-minute delay, although the context changes alone did elicit significant errors.

  1. Mindfulness-Based Stress Reduction in Post-treatment Breast Cancer Patients: Immediate and Sustained Effects Across Multiple Symptom Clusters.

    PubMed

    Reich, Richard R; Lengacher, Cecile A; Alinat, Carissa B; Kip, Kevin E; Paterson, Carly; Ramesar, Sophia; Han, Heather S; Ismail-Khan, Roohi; Johnson-Mallard, Versie; Moscoso, Manolete; Budhrani-Shani, Pinky; Shivers, Steve; Cox, Charles E; Goodman, Matthew; Park, Jong

    2017-01-01

    Breast cancer survivors (BCS) face adverse physical and psychological symptoms, often co-occurring. Biologic and psychological factors may link symptoms within clusters, distinguishable by prevalence and/or severity. Few studies have examined the effects of behavioral interventions or treatment of symptom clusters. The aim of this study was to identify symptom clusters among post-treatment BCS and determine symptom cluster improvement following the Mindfulness-Based Stress Reduction for Breast Cancer (MBSR(BC)) program. Three hundred twenty-two Stage 0-III post-treatment BCS were randomly assigned to either a six-week MBSR(BC) program or usual care. Psychological (depression, anxiety, stress, and fear of recurrence), physical (fatigue, pain, sleep, and drowsiness), and cognitive symptoms and quality of life were assessed at baseline, six, and 12 weeks, along with demographic and clinical history data at baseline. A three-step analytic process included the error-accounting models of factor analysis and structural equation modeling. Four symptom clusters emerged at baseline: pain, psychological, fatigue, and cognitive. From baseline to six weeks, the model demonstrated evidence of MBSR(BC) effectiveness in both the psychological (anxiety, depression, perceived stress and QOL, emotional well-being) (P = 0.007) and fatigue (fatigue, sleep, and drowsiness) (P < 0.001) clusters. Results between six and 12 weeks showed sustained effects, but further improvement was not observed. Our results provide clinical effectiveness evidence that MBSR(BC) works to improve symptom clusters, particularly for psychological and fatigue symptom clusters, with the greatest improvement occurring during the six-week program with sustained effects for several weeks after MBSR(BC) training. Name and URL of Registry: ClinicalTrials.gov. Registration number: NCT01177124. Copyright © 2016. Published by Elsevier Inc.

  2. A meta-analysis of inhibitory-control deficits in patients diagnosed with Alzheimer's dementia.

    PubMed

    Kaiser, Anna; Kuhlmann, Beatrice G; Bosnjak, Michael

    2018-05-10

    The authors conducted meta-analyses to determine the magnitude of performance impairments in patients diagnosed with Alzheimer's dementia (AD) compared with healthy aging (HA) controls on eight tasks commonly used to measure inhibitory control. Response time (RT) and error rates from a total of 64 studies were analyzed with random-effects models (overall effects) and mixed-effects models (moderator analyses). Large differences between AD patients and HA controls emerged in the basic inhibition conditions of many of the tasks with AD patients often performing slower, overall d = 1.17, 95% CI [0.88-1.45], and making more errors, d = 0.83 [0.63-1.03]. However, comparably large differences were also present in performance on many of the baseline control-conditions, d = 1.01 [0.83-1.19] for RTs and d = 0.44 [0.19-0.69] for error rates. A standardized derived inhibition score (i.e., control-condition score - inhibition-condition score) suggested no significant mean group difference for RTs, d = -0.07 [-0.22-0.08], and only a small difference for errors, d = 0.24 [-0.12-0.60]. Effects systematically varied across tasks and with AD severity. Although the error rate results suggest a specific deterioration of inhibitory-control abilities in AD, further processes beyond inhibitory control (e.g., a general reduction in processing speed and other, task-specific attentional processes) appear to contribute to AD patients' performance deficits observed on a variety of inhibitory-control tasks. Nonetheless, the inhibition conditions of many of these tasks well discriminate between AD patients and HA controls. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741

  4. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
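    A minimal sketch of the prediction-error idea summarized above (error = received reward minus predicted reward), with a simple value-update rule; the learning rate and reward sequence are assumptions chosen only for illustration.

    ```python
    # Reward prediction error: delta = received reward - predicted reward,
    # followed by a simple value update (illustrative learning rule).
    def prediction_error(received: float, predicted: float) -> float:
        return received - predicted

    value = 0.0          # predicted value of a cue
    alpha = 0.1          # learning rate (assumed)
    for reward in [1.0, 1.0, 1.0, 0.0]:          # three rewarded trials, then an omission
        delta = prediction_error(reward, value)  # positive, shrinking, then negative
        value += alpha * delta                   # value moves toward the experienced reward
        print(f"reward={reward:.1f}  delta={delta:+.3f}  value={value:.3f}")
    ```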

  5. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  6. Accounting for dropout bias using mixed-effects models.

    PubMed

    Mallinckrodt, C H; Clark, W S; David, S R

    2001-01-01

    Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
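    A rough sketch of the contrast described above, on simulated data: LOCF carries each subject's last observed value forward to the endpoint, while a likelihood-based mixed-effects analysis uses all observed repeated measures without imputation. The dropout mechanism, variable names, and the simple random-intercept structure are illustrative assumptions rather than the paper's exact MMRM specification.

    ```python
    # LOCF imputation versus a mixed-effects repeated-measures analysis (sketch).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n, visits = 40, 4
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n), visits),
        "visit": np.tile(np.arange(visits), n),
        "group": np.repeat(rng.integers(0, 2, n), visits),
    })
    df["change"] = 0.5 * df["group"] * df["visit"] + rng.normal(0, 1, len(df))

    # Introduce dropout: later visits are more likely to be missing.
    dropout = rng.random(len(df)) < 0.15 * df["visit"]
    observed = df.loc[~dropout].copy()

    # LOCF: carry each subject's last observed value forward to the endpoint.
    locf = observed.sort_values("visit").groupby("subject").last()

    # Mixed-effects analysis of the observed repeated measures (no imputation).
    mmrm = smf.mixedlm("change ~ group * visit", observed, groups=observed["subject"]).fit()

    print(locf.groupby("group")["change"].mean())   # LOCF endpoint means by group
    print(mmrm.params["group:visit"])               # model-based group-by-time effect
    ```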

  7. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirmed the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  8. Driving Intervention for Returning Combat Veterans.

    PubMed

    Classen, Sherrilene; Winter, Sandra; Monahan, Miriam; Yarney, Abraham; Link Lutz, Amanda; Platek, Kyle; Levy, Charles

    2017-04-01

    Increased crash incidence following deployment and veterans' reports of driving difficulty spurred traffic safety research for this population. We conducted an interim analysis on the efficacy of a simulator-based occupational therapy driving intervention (OT-DI) compared with traffic safety education (TSE) in a randomized controlled trial. During baseline and post-testing, OT-Driver Rehabilitation Specialists and one OT-Certified Driver Rehabilitation Specialist measured driving performance errors on a DriveSafety CDS-250 high-fidelity simulator. The intervention group (n = 13) received three OT-DI sessions addressing driving errors and visual-search retraining. The control group (n = 13) received three TSE sessions addressing personal factors and defensive driving. Based on Wilcoxon rank-sum analysis, the OT-DI group's errors were significantly reduced when comparing baseline with Post-Test 1 (p < .0001) and comparing the OT-DI group with the TSE group at Post-Test 1 (p = .01). These findings provide support for the efficacy of the OT-DI and set the stage for a future effectiveness study.
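    A minimal sketch of the Wilcoxon rank-sum comparison named above, using scipy; the error counts below are invented for illustration and are not the study's data.

    ```python
    # Between-group Wilcoxon rank-sum test on driving error counts (illustrative data).
    from scipy.stats import ranksums

    otdi_posttest_errors = [4, 6, 3, 5, 2, 7, 4, 3, 5, 4, 6, 3, 5]        # intervention group (n = 13)
    tse_posttest_errors  = [9, 11, 8, 12, 10, 9, 13, 8, 11, 10, 9, 12, 10]  # control group (n = 13)

    stat, p = ranksums(otdi_posttest_errors, tse_posttest_errors)
    print(f"rank-sum statistic = {stat:.2f}, p = {p:.4f}")
    ```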

  9. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  10. Subnanosecond GPS-based clock synchronization and precision deep-space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
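    As a back-of-the-envelope check of the figures quoted above, a clock synchronization error maps to an angular position error of roughly (c × Δt) / baseline; the short sketch below reproduces the 0.3 ns → ~11 nrad relation for an 8000 km baseline.

    ```python
    # Delay error expressed as a path length, divided by the baseline length,
    # gives the corresponding angular position error.
    c = 299_792_458.0          # speed of light, m/s
    clock_error_s = 0.3e-9     # 0.3 ns clock synchronization error
    baseline_m = 8.0e6         # 8000 km baseline

    angle_rad = c * clock_error_s / baseline_m
    print(f"{angle_rad * 1e9:.1f} nrad")   # ~11.2 nrad
    ```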

  11. Sub-nanosecond clock synchronization and precision deep space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, Charles; Lichten, Stephen; Jefferson, David; Border, James S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished at the NASA Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals to ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3 ns error in clock synchronization resulting in an 11 nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock synchronization and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft observations without near-simultaneous quasar-based calibrations. Solutions are presented for a global network of GPS receivers in which the formal errors in clock offset parameters are less than 0.5 ns. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry and the examination of clock closure suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.

  12. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  13. Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment

    NASA Technical Reports Server (NTRS)

    Dixon, T. H.; Wolf, S. Kornreich

    1990-01-01

    Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high precision results with GPS in the CASA region.

  14. Neutrinos help reconcile Planck measurements with the local universe.

    PubMed

    Wyman, Mark; Rudd, Douglas H; Vanderveld, R Ali; Hu, Wayne

    2014-02-07

    Current measurements of the low and high redshift Universe are in tension if we restrict ourselves to the standard six-parameter model of flat ΛCDM. This tension has two parts. First, the Planck satellite data suggest a higher normalization of matter perturbations than local measurements of galaxy clusters. Second, the expansion rate of the Universe today, H0, derived from local distance-redshift measurements is significantly higher than that inferred using the acoustic scale in galaxy surveys and the Planck data as a standard ruler. The addition of a sterile neutrino species changes the acoustic scale and brings the two into agreement; meanwhile, adding mass to the active neutrinos or to a sterile neutrino can suppress the growth of structure, bringing the cluster data into better concordance as well. For our fiducial data set combination, with statistical errors for clusters, a model with a massive sterile neutrino shows 3.5σ evidence for a nonzero mass and an even stronger rejection of the minimal model. A model with massive active neutrinos and a massless sterile neutrino is similarly preferred. An eV-scale sterile neutrino mass--of interest for short baseline and reactor anomalies--is well within the allowed range. We caution that (i) unknown astrophysical systematic errors in any of the data sets could weaken this conclusion, but they would need to be several times the known errors to eliminate the tensions entirely; (ii) the results we find are at some variance with analyses that do not include cluster measurements; and (iii) some tension remains among the data sets even when new neutrino physics is included.

  15. Integrated Modeling Activities for the James Webb Space Telescope: Structural-Thermal-Optical Analysis

    NASA Technical Reports Server (NTRS)

    Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.

    2004-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as STOP, analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.

  16. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  17. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  18. The RMI Space Weather and Navigation Systems (SWANS) Project

    NASA Astrophysics Data System (ADS)

    Warnant, Rene; Lejeune, Sandrine; Wautelet, Gilles; Spits, Justine; Stegen, Koen; Stankov, Stan

    The SWANS (Space Weather and Navigation Systems) research and development project (http://swans.meteo.be) is an initiative of the Royal Meteorological Institute (RMI) under the auspices of the Belgian Solar-Terrestrial Centre of Excellence (STCE). The RMI SWANS objectives are: research on space weather and its effects on GNSS applications; permanent monitoring of the local/regional geomagnetic and ionospheric activity; and development/operation of relevant nowcast, forecast, and alert services to help professional GNSS/GALILEO users in mitigating space weather effects. Several SWANS developments have already been implemented and are available for use. The K-LOGIC (Local Operational Geomagnetic Index K Calculation) system is a nowcast system based on a fully automated computer procedure for real-time digital magnetogram data acquisition, data screening, and calculating the local geomagnetic K index. Simultaneously, the planetary Kp index is estimated from solar wind measurements, thus adding to the service reliability and providing forecast capabilities as well. A novel hybrid empirical model, based on these ground- and space-based observations, has been implemented for nowcasting and forecasting the geomagnetic index, also issuing alerts whenever storm-level activity is indicated. A very important feature of the nowcast/forecast system is the strict control on the data input and processing, allowing for an immediate assessment of the output quality. The purpose of the LIEDR (Local Ionospheric Electron Density Reconstruction) system is to acquire and process data from simultaneous ground-based GNSS TEC and digital ionosonde measurements, and subsequently to deduce the vertical electron density distribution. A key module is the real-time estimation of the ionospheric slab thickness, offering additional information on the local ionospheric dynamics. The RTK (Real Time Kinematic) status mapping provides a quick look at the small-scale ionospheric effects on the RTK precision for several GPS stations in Belgium. The service assesses the effect of small-scale ionospheric irregularities by monitoring the high-frequency TEC rate of change at any given station. This assessment results in a (colour) code assigned to each station, ranging from "quiet" (green) to "extreme" (red) and referring to the local ionospheric conditions. Alerts via e-mail are sent to subscribed users when disturbed conditions are observed. SoDIPE (Software for Determining the Ionospheric Positioning Error) estimates the positioning error due to the ionospheric conditions only (called "ionospheric error") in high-precision positioning applications (RTK in particular). For each of the Belgian Active Geodetic Network (AGN) baselines, SoDIPE computes the ionospheric error and its median value (every 15 minutes). Again, a (colour) code is assigned to each baseline, ranging from "nominal" (green) to "extreme" (red) error level. Finally, all available baselines (drawn in the colour corresponding to the error level) are displayed on a map of Belgium. Future SWANS work will focus on regional ionospheric monitoring and developing various other nowcast and forecast services.

  19. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available from 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
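    A minimal sketch of model selection with the Bayesian information criterion (BIC) mentioned above, where BIC = k ln(n) − 2 ln(L_max) and the model with the lowest BIC is preferred; the log-likelihoods and parameter counts below are placeholders, not values from the paper's simulations.

    ```python
    # BIC-based comparison of two candidate dynamical models (illustrative numbers).
    import numpy as np

    def bic(log_likelihood_max: float, n_params: int, n_data: int) -> float:
        return n_params * np.log(n_data) - 2.0 * log_likelihood_max

    n_sources = 500
    models = {
        "spiral potential": {"logL": -1210.0, "k": 6},   # assumed values
        "bar potential":    {"logL": -1225.0, "k": 5},
    }
    for name, m in models.items():
        print(name, round(bic(m["logL"], m["k"], n_sources), 1))   # lower BIC is preferred
    ```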

  20. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture, and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of the satellite velocity error, which increase linearly with DLTTD. The cycle slip of the asynchronous double-differenced carrier phase is detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a specific case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports the mobile phone network data link transfer mode for the reference receiver data. This method avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves a 20 Hz or even higher output rate in real time. The ARTK positioning accuracy is better and more robust at high rates than the combination of the phase difference over time (PDOT) and SRTK methods. The ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy can be achieved even when the DLTTD is 15 s.

  1. Estimation of reliability of predictions and model applicability domain evaluation in the analysis of acute toxicity (LD50).

    PubMed

    Sazonovas, A; Japertas, P; Didziapetris, R

    2010-01-01

    This study presents a new type of acute toxicity (LD50) prediction that enables automated assessment of the reliability of predictions (which is synonymous with the assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). Analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces, a combination called the Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends on both the compound's similarity to the training set (which accounts for similar trends in LD50 variations within multiple bootstraps) and the consistency of experimental results with regard to the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with Root Mean Square Error (RMSE) in all validation sets, thus providing quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and for prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.

  2. Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay

    NASA Technical Reports Server (NTRS)

    Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.

    1991-01-01

    An important source of error in VLBI estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.

  3. Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Daniel S.; Mines, Gregory L.; Turchi, Craig S.

    2015-09-02

    The Stillwater Power Plant is the first hybrid plant in the world able to bring together a medium-enthalpy geothermal unit with solar thermal and solar photovoltaic systems. Solar field and power plant models have been developed to predict the performance of the Stillwater geothermal / solar-thermal hybrid power plant. The models have been validated using operational data from the Stillwater plant. A preliminary effort to optimize performance of the Stillwater hybrid plant using optical characterization of the solar field has been completed. The Stillwater solar field optical characterization involved measurement of mirror reflectance, mirror slope error, and receiver position error. The measurements indicate that the solar field may generate 9% less energy than the design value if an appropriate tracking offset is not employed. A perfect tracking offset algorithm may be able to boost the solar field performance by about 15%. The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV position optimization, turbine inlet entropy control using optimization of multiple process variables, and mixed working fluid substitution. The hybrid plant models predict that each of these operating strategies could increase net power generation relative to the baseline Stillwater hybrid plant operations.

  4. Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Horta, Lucas G.

    2011-01-01

    Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purposes of predicting the overall system response with external energy absorbers were not adequate under more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced 2-norm prediction error by 51% but did not improve impact shape orthogonality.

  5. A New Calibration Method for Commercial RGB-D Sensors

    PubMed Central

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-01-01

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695

  6. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distances reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  7. Measuring continuous baseline covariate imbalances in clinical trial data

    PubMed Central

    Ciolino, Jody D.; Martin, Renee’ H.; Zhao, Wenle; Hill, Michael D.; Jauch, Edward C.; Palesch, Yuko Y.

    2014-01-01

    This paper presents and compares several methods of measuring continuous baseline covariate imbalance in clinical trial data. Simulations illustrate that though the t-test is an inappropriate method of assessing continuous baseline covariate imbalance, the test statistic itself is a robust measure for capturing imbalance in continuous covariate distributions. Guidelines to assess the effects of imbalance on bias, type I error rate, and power for the hypothesis test of treatment effect on continuous outcomes are presented, and the benefit of covariate-adjusted analysis (ANCOVA) is also illustrated. PMID:21865270
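    A short sketch of the idea above: the two-sample t statistic used not as a hypothesis test but as a descriptive measure of continuous baseline covariate imbalance between arms, alongside the standardized mean difference. The simulated covariate values are illustrative only.

    ```python
    # t statistic and standardized mean difference as imbalance measures (illustrative data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    arm_a = rng.normal(50.0, 10.0, size=120)   # baseline covariate, arm A
    arm_b = rng.normal(52.0, 10.0, size=120)   # slight imbalance in arm B

    t_stat, _ = stats.ttest_ind(arm_a, arm_b)
    std_diff = (arm_a.mean() - arm_b.mean()) / np.sqrt((arm_a.var(ddof=1) + arm_b.var(ddof=1)) / 2)
    print(f"t statistic (imbalance measure): {t_stat:.2f}")
    print(f"standardized mean difference:    {std_diff:.2f}")
    ```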

  8. Stability Evaluation of Buildings in Urban Area Using Persistent Scatterer Interfometry -Focused on Thermal Expansion Effect

    NASA Astrophysics Data System (ADS)

    Choi, J. H.; Kim, S. W.; Won, J. S.

    2017-12-01

    The objective of this study is to monitor and evaluate the stability of buildings in Seoul, Korea. This study includes both algorithm development and application to a case study. The development focuses on improving the PSI approach for discriminating various geophysical phase components and separating them from the target displacement phase. Thermal expansion is one of the key components that make precise displacement measurement difficult. The core idea is to optimize the thermal expansion factor using air temperature data and to model the corresponding phase by fitting the residual phase. We used TerraSAR-X SAR data acquired over two years, from 2011 to 2013, in Seoul, Korea. The seasonal temperature fluctuation in Seoul is considerable. Another problem is the dense high-rise development in Seoul, which contributes substantially to DEM errors. To avoid a high computational burden and an unstable solution of the nonlinear equation due to the unknown parameters (a thermal expansion parameter as well as two conventional parameters: linear velocity and DEM errors), we separate the phase model into two main steps as follows. First, multi-baseline pairs with very short time intervals, in which deformation components and thermal expansion can be neglected, were used to estimate DEM errors. Second, single-baseline pairs were used to estimate the two remaining parameters, the linear deformation rate and thermal expansion. The thermal expansion of buildings correlates closely with the seasonal temperature fluctuation. Figure 1 shows deformation patterns of two selected buildings in Seoul. In the left column of Figure 1, it is difficult to observe the true ground subsidence due to a large cyclic pattern caused by thermal dilation of the buildings. The thermal dilation often misleads the results into wrong conclusions. After correction by the proposed method, the true ground subsidence could be precisely measured, as in the bottom right figure of Figure 1. The results demonstrate how the thermal expansion phase blinds the time-series measurement of ground motion and how well the proposed approach is able to remove the noise phases caused by thermal expansion and DEM errors. Some of the detected displacements matched well with previously reported events, such as ground subsidence and sinkholes.

  9. Constraints on a scale-dependent bias from galaxy clustering

    NASA Astrophysics Data System (ADS)

    Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.

    2017-01-01

    We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα-emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor of up to 2, depending on the bias model adopted. Second, we find that the linear bias parameter b0 can be estimated to within 1%-2% accuracy at various redshifts regardless of the fiducial model. The nonlinear bias parameters have considerably larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
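    A minimal sketch of a Fisher-matrix forecast of parameter errors, the general technique used above: F = Jᵀ C⁻¹ J for an observable with Gaussian errors, with marginalized 1σ uncertainties from the square roots of the diagonal of F⁻¹. The toy observable (a linear bias factor times a fixed template) and all numbers are assumptions for illustration, not the paper's survey model.

    ```python
    # Toy Fisher-matrix forecast: derivatives of the observable w.r.t. parameters,
    # combined with the inverse data covariance, give forecast parameter errors.
    import numpy as np

    k = np.linspace(0.02, 0.2, 20)             # wavenumbers (arbitrary units)
    template = 1.0 / (1.0 + (k / 0.1) ** 2)    # fixed "power spectrum" template
    sigma = 0.05 * template                    # assumed measurement errors
    cov_inv = np.diag(1.0 / sigma**2)

    # Toy model: P(k) = (b0 + b1 * k)^2 * template; derivatives at fiducial b0 = 1, b1 = 0.
    b0, b1 = 1.0, 0.0
    dP_db0 = 2.0 * (b0 + b1 * k) * template
    dP_db1 = 2.0 * (b0 + b1 * k) * k * template
    J = np.column_stack([dP_db0, dP_db1])

    F = J.T @ cov_inv @ J
    errors = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma uncertainties
    print(dict(zip(["sigma_b0", "sigma_b1"], errors.round(4))))
    ```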

  10. Adverse effects in dual-feed interferometry

    NASA Astrophysics Data System (ADS)

    Colavita, M. Mark

    2009-11-01

    Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.

  11. Evaluating software development by analysis of changes: The data from the software engineering laboratory

    NASA Technical Reports Server (NTRS)

    1982-01-01

    An effective data collection methodology for evaluating software development methodologies was applied to four different software development projects. Goals of the data collection included characterizing changes and errors, characterizing projects and programmers, identifying effective error detection and correction techniques, and investigating ripple effects. The data collected consisted of changes (including error corrections) made to the software after code was written and baselined, but before testing began. Data collection and validation were concurrent with software development. Changes reported were verified by interviews with programmers.

  12. Multi-scale Characterization and Modeling of Surface Slope Probability Distribution for ~20-km Diameter Lunar Craters

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Robinson, M. S.; Boyd, A. K.

    2013-12-01

    Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies), where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was computed over multiple scales. This slope analysis showed that local slope distributions are non-Gaussian for both crater walls and floors. Over larger baselines (~100 meters), crater wall slope probability distributions do approximate Gaussian distributions better, but have long distribution tails. Crater floor probability distributions, however, were always asymmetric (for the baseline scales analyzed) and less affected by baseline scale variations. Accordingly, our results suggest that long-tailed probability distributions (like the Cauchy) and a baseline-dependent multi-scale model can be more effective in describing the slope statistics for lunar topography. References: [1] Moore, H. (1971), JGR, 75(11); [2] Marcus, A. H. (1969), JGR, 74(22); [3] Pike, R. J. (1970), U.S. Geological Survey Working Paper; [4] Costes, N. C., Farmer, J. E., and George, E. B. (1972), NASA Technical Report TR R-401; [5] Parker, M. N., and Tyler, G. L. (1973), Radio Science, 8(3), 177-184; [6] Alekseev, V. A., et al. (1968), Soviet Astronomy, Vol. 11, p. 860; [7] Burns et al. (2012), Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B4, 483-488; [8] Smith et al. (2010), GRL 37, L18204, DOI: 10.1029/2010GL043751; [9] Wagner, R., Robinson, M., Speyerer, E., Mahanti, P., LPSC 2013, #2924.
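    A sketch of the distribution comparison described above: fit a Gaussian and a Cauchy to a sample of surface slopes and compare their log-likelihoods, which favour the heavy-tailed model when the tails are long. The slope sample is synthetic (Student-t noise as a stand-in for long-tailed slope data), not LROC NAC DEM data.

    ```python
    # Gaussian vs. Cauchy fits to a heavy-tailed synthetic slope sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    slopes_deg = 10.0 + 6.0 * rng.standard_t(df=3, size=5000)   # synthetic, heavy-tailed slopes

    mu, sigma = stats.norm.fit(slopes_deg)
    loc, scale = stats.cauchy.fit(slopes_deg)

    # Log-likelihoods: higher is better; long tails favour the Cauchy here.
    ll_norm = stats.norm.logpdf(slopes_deg, mu, sigma).sum()
    ll_cauchy = stats.cauchy.logpdf(slopes_deg, loc, scale).sum()
    print(f"Gaussian logL = {ll_norm:.1f}, Cauchy logL = {ll_cauchy:.1f}")
    ```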

  13. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Expert Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  14. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by a trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  15. Teaching physical activities to students with significant disabilities using video modeling.

    PubMed

    Cannella-Malone, Helen I; Mizrachi, Sharona V; Sabielny, Linsey M; Jimenez, Eliseo D

    2013-06-01

    The objective of this study was to examine the effectiveness of video modeling on teaching physical activities to three adolescents with significant disabilities. The study implemented a multiple baseline across six physical activities (three per student): jumping rope, scooter board with cones, ladder drill (i.e., feet going in and out), ladder design (i.e., multiple steps), shuttle run, and disc ride. Additional prompt procedures (i.e., verbal, gestural, visual cues, and modeling) were implemented within the study. After the students mastered the physical activities, we tested to see if they would link the skills together (i.e., complete an obstacle course). All three students made progress learning the physical activities, but only one learned them with video modeling alone (i.e., without error correction). Video modeling can be an effective tool for teaching students with significant disabilities various physical activities, though additional prompting procedures may be needed.

  16. An approach for real-time fast point positioning of the BeiDou Navigation Satellite System using augmentation information

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Rui; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2018-07-01

    This study proposes an approach to facilitate real-time fast point positioning of the BeiDou Navigation Satellite System (BDS) based on regional augmentation information. We term this the precise positioning based on augmentation information (BPP) approach. The coordinates of the reference stations were highly constrained to extract the augmentation information, which contained not only the satellite orbit and clock errors correlated with the satellite running state, but also the atmospheric error and unmodeled error, which are correlated with the spatial and temporal states. Based on these mixed augmentation corrections, a precise point positioning (PPP) model could be used for the coordinate estimation of the user stations, and the float ambiguity could be easily fixed for the single-difference between satellites. Thus, this technique provided a quick and high-precision positioning service. Three different datasets with small, medium, and large baselines (0.6 km, 30 km and 136 km) were used to validate the feasibility and effectiveness of the proposed BPP method. The validations showed that, using the BPP model, a 1–2 cm positioning service can be provided over a 100 km wide area after just 2 s of initialization. Thus, as the proposed approach capitalizes on the advantages of both PPP and RTK and provides consistent application, it can be used for area augmentation positioning.

  17. Multiple Intravenous Infusions Phase 2b: Laboratory Study

    PubMed Central

    Pinkney, Sonia; Fan, Mark; Chan, Katherine; Koczmara, Christine; Colvin, Christopher; Sasangohar, Farzan; Masino, Caterina; Easty, Anthony; Trbovich, Patricia

    2014-01-01

    Background: Administering multiple intravenous (IV) infusions to a single patient via infusion pump occurs routinely in health care, but there has been little empirical research examining the risks associated with this practice or ways to mitigate those risks. Objectives: To identify the risks associated with multiple IV infusions and assess the impact of interventions on nurses’ ability to safely administer them. Data Sources and Review Methods: Forty nurses completed infusion-related tasks in a simulated adult intensive care unit, with and without interventions (i.e., repeated-measures design). Results: Errors were observed in completing common tasks associated with the administration of multiple IV infusions, including the following (all values from baseline, which was current practice): setting up and programming multiple primary continuous IV infusions (e.g., 11.7% programming errors); identifying IV infusions (e.g., 7.7% line-tracing errors); managing dead volume (e.g., 96.0% flush rate errors following IV syringe dose administration); setting up a secondary intermittent IV infusion (e.g., 11.3% secondary clamp errors); and administering an IV pump bolus (e.g., 11.5% programming errors). Of 10 interventions tested, 6 (1 practice, 3 technology, and 2 educational) significantly decreased or even eliminated errors compared to baseline. Limitations: The simulation of an adult intensive care unit at 1 hospital limited the ability to generalize results. The study results were representative of nurses who received training in the interventions but had little experience using them. The longitudinal effects of the interventions were not studied. Conclusions: Administering and managing multiple IV infusions is a complex and risk-prone activity. However, when a patient requires multiple IV infusions, targeted interventions can reduce identified risks. A combination of standardized practice, technology improvements, and targeted education is required. PMID:26316919

  18. Normative Values of the Sport Concussion Assessment Tool 3 (SCAT3) in High School Athletes.

    PubMed

    Snedden, Traci R; Brooks, Margaret Alison; Hetzel, Scott; McGuine, Tim

    2017-09-01

    Objective: To establish sex-, age-, and concussion history-specific normative baseline Sport Concussion Assessment Tool 3 (SCAT3) values in adolescent athletes. Design: Prospective cohort. Setting: Seven Wisconsin high schools. Participants: Seven hundred fifty-eight high school athletes participating in 19 sports. Independent Variables: Sex, age, and concussion history. Main Outcome Measures: Sport Concussion Assessment Tool 3 (SCAT3): total number of symptoms; symptom severity; total Standardized Assessment of Concussion (SAC); and each SAC component (orientation, immediate memory, concentration, delayed recall); Balance Error Scoring System (BESS) total errors (BESS, floor and foam pad). Results: Males reported a higher total number of symptoms [median (interquartile range): 0 (0-2) vs 0 (0-1), P = 0.001] and severity of symptoms [0 (0-3) vs 0 (0-2), P = 0.001] and a lower mean (SD) total SAC [26.0 (2.3) vs 26.4 (2.0), P = 0.026], and orientation [5 (4-5) vs 5 (5-5), P = 0.021]. There was no difference in baseline scores between sexes for immediate memory, concentration, delayed recall, or BESS total errors. No differences were found for any test domain based on age. Previously concussed athletes reported a higher total number of symptoms [1 (0-4) vs 0 (0-2), P = 0.001] and symptom severity [2 (0-5) vs 0 (0-2), P = 0.001]. BESS total scores did not differ by concussion history. Conclusions: This study represents the first published normative baseline SCAT3 values in high school athletes. Results varied by sex and history of previous concussion but not by age. The normative baseline values generated from this study will help clinicians better evaluate and interpret SCAT3 results of concussed adolescent athletes.
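    A small sketch of how normative baseline values such as those above could be tabulated, computing the median and interquartile range of a score by sex and concussion history with pandas; the miniature data frame is invented for illustration.

    ```python
    # Median and IQR of a baseline score by subgroup (illustrative data).
    import pandas as pd

    df = pd.DataFrame({
        "sex": ["M", "M", "F", "F", "M", "F", "M", "F"],
        "prior_concussion": [True, False, False, True, False, False, True, False],
        "total_symptoms": [3, 0, 1, 4, 0, 0, 2, 1],
    })

    summary = (
        df.groupby(["sex", "prior_concussion"])["total_symptoms"]
          .quantile([0.25, 0.5, 0.75])
          .unstack()
    )
    print(summary)   # columns 0.25, 0.5, 0.75: IQR bounds and median per subgroup
    ```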

  19. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial states uncertainties and aerodynamic parameters uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).
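    The composite scheme itself is specific to the paper, but the core idea of estimating a lumped disturbance and cancelling it through feedforward can be shown with a minimal sketch. The sketch below uses a generic double-integrator plant and a linear extended state observer as a stand-in for the paper's Mars-entry dynamics, disturbance observer design, and model predictive controller; all gains, the time step, and the disturbance profile are illustrative assumptions.

      import numpy as np

      # Minimal sketch: feedforward disturbance rejection on a double integrator
      # x1' = x2, x2' = u + d.  An extended state observer (ESO) estimates the
      # lumped disturbance d and the controller cancels it: u = u_fb - d_hat.
      dt, T = 0.01, 10.0
      l1, l2, l3 = 30.0, 300.0, 1000.0        # observer gains (assumed)
      kp, kd = 4.0, 4.0                        # feedback gains (assumed)

      x = np.array([1.0, 0.0])                 # true state [position, velocity]
      z = np.zeros(3)                          # observer state [x1_hat, x2_hat, d_hat]

      for k in range(int(T / dt)):
          t = k * dt
          d = 0.5 * np.sin(0.5 * t)            # unknown disturbance (assumed profile)
          u_fb = -kp * x[0] - kd * x[1]        # nominal feedback toward the origin
          u = u_fb - z[2]                      # feedforward cancellation of d_hat
          x = x + dt * np.array([x[1], u + d])                 # propagate the true plant
          e = x[0] - z[0]                                      # position innovation
          z = z + dt * np.array([z[1] + l1 * e, z[2] + u + l2 * e, l3 * e])

      print(f"final position error {x[0]:.4f}, disturbance estimate error {d - z[2]:.4f}")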

  20. IRIS-S - Extending geodetic very long baseline interferometry observations to the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Carter, W. E.; Robertson, D. S.; Nothnagel, A.; Nicolson, G. D.; Schuh, H.

    1988-12-01

    High-accuracy geodetic very long baseline interferometry measurements between the African, Eurasian, and North American plates have been analyzed to determine the terrestrial coordinates of the Hartebeesthoek observatory to better than 10 cm, to determine the celestial coordinates of eight Southern Hemisphere radio sources with milliarcsecond (mas) accuracy, and to derive quasi-independent polar motion, UT1, and nutation time series. Comparison of the Earth orientation time series with ongoing International Radio Interferometric Surveying project values shows agreement at about the 1 mas level in polar motion and nutation and 0.1 ms of time in UT1. Given the independence of the observing sessions and the unlikeliness of common systematic error sources, this level of agreement serves to bound the total errors in both measurement series.

  1. Spatial heterogeneity of type I error for local cluster detection tests

    PubMed Central

    2014-01-01

    Background Like power, the type I error of cluster detection tests (CDTs) should be assessed spatially. Indeed, CDTs’ type I error and power both have a spatial component, as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets was performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects and, in particular, two baseline risks. The simulated datasets were analyzed using Kulldorff’s spatial scan as a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed a strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters detected on the edge of the region should be carefully considered, as such clusters rarely occur when there is no true cluster. Further work is needed to combine results from power studies with this work in order to optimize CDT performance. PMID:24885343
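    The participation rate indicator described above can be computed by simple counting. The sketch below assumes a list of detected clusters (one most-likely cluster per simulated null dataset, each given as a set of spatial-unit indices); the function name, variable names, and toy numbers are placeholders, not the authors' implementation.

      import numpy as np

      # Participation rate: for each spatial unit, the fraction of simulated null
      # datasets in which the (wrongly) detected cluster contains that unit.
      def participation_rate(detected_clusters, n_units):
          counts = np.zeros(n_units)
          for cluster in detected_clusters:      # one set of unit indices per dataset
              for unit in cluster:
                  counts[unit] += 1
          return counts / len(detected_clusters)

      # toy example: 3 simulated null datasets over 5 spatial units
      rates = participation_rate([{0, 1}, {1, 2}, {1}], n_units=5)
      print(rates)   # unit 1 participates in all three wrongly detected clusters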

  2. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  3. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.

  4. Using compressive sensing to recover images from PET scanners with partial detector rings.

    PubMed

    Valiollahzadeh, SeyyedMajid; Clark, John W; Mawlawi, Osama

    2015-01-01

    Most positron emission tomography/computed tomography (PET/CT) scanners consist of tightly packed discrete detector rings to improve scanner efficiency. The authors' aim was to use compressive sensing (CS) techniques in PET imaging to investigate the possibility of decreasing the number of detector elements per ring (introducing gaps) while maintaining image quality. A CS model based on a combination of gradient magnitude and wavelet domains (wavelet-TV) was developed to recover missing observations in PET data acquisition. The model was designed to minimize the total variation (TV) and L1-norm of wavelet coefficients while constrained by the partially observed data. The CS model also incorporated a Poisson noise term that modeled the observed noise while suppressing its contribution by penalizing the Poisson log likelihood function. Three experiments were performed to evaluate the proposed CS recovery algorithm: a simulation study, a phantom study, and six patient studies. The simulation dataset comprised six disks of various sizes in a uniform background with an activity concentration of 5:1. The simulated image was multiplied by the system matrix to obtain the corresponding sinogram and then Poisson noise was added. The resultant sinogram was masked to create the effect of partial detector removal and then the proposed CS algorithm was applied to recover the missing PET data. In addition, different levels of noise were simulated to assess the performance of the proposed algorithm. For the phantom study, an IEC phantom with six internal spheres each filled with F-18 at an activity-to-background ratio of 10:1 was used. The phantom was imaged twice on a RX PET/CT scanner: once with all detectors operational (baseline) and once with four detector blocks (11%) turned off at each of 0 ˚, 90 ˚, 180 ˚, and 270° (partially sampled). The partially acquired sinograms were then recovered using the proposed algorithm. For the third test, PET images from six patient studies were investigated using the same strategy of the phantom study. The recovered images using WTV and TV as well as the partially sampled images from all three experiments were then compared with the fully sampled images (the baseline). Comparisons were done by calculating the mean error (%bias), root mean square error (RMSE), contrast recovery (CR), and SNR of activity concentration in regions of interest drawn in the background as well as the disks, spheres, and lesions. For the simulation study, the mean error, RMSE, and CR for the WTV (TV) recovered images were 0.26% (0.48%), 2.6% (2.9%), 97% (96%), respectively, when compared to baseline. For the partially sampled images, these results were 22.5%, 45.9%, and 64%, respectively. For the simulation study, the average SNR for the baseline was 41.7 while for WTV (TV), recovered image was 44.2 (44.0). The phantom study showed similar trends with 5.4% (18.2%), 15.6% (18.8%), and 78% (60%), respectively, for the WTV (TV) images and 33%, 34.3%, and 69% for the partially sampled images. For the phantom study, the average SNR for the baseline was 14.7 while for WTV (TV) recovered image was 13.7 (11.9). Finally, the average of these values for the six patient studies for the WTV-recovered, TV, and partially sampled images was 1%, 7.2%, 92% and 1.3%, 15.1%, 87%, and 27%, 25.8%, 45%, respectively. CS with WTV is capable of recovering PET images with good quantitative accuracy from partially sampled data. 
Such an approach can be used to potentially reduce the cost of scanners while maintaining good image quality.
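    The wavelet-TV model above combines several terms (TV, a wavelet L1 penalty, and a Poisson likelihood), so purely as a rough illustration of the sparsity-plus-data-consistency idea, the simplified sketch below alternates wavelet soft-thresholding with re-insertion of the observed sinogram bins. It uses PyWavelets, drops the TV and Poisson terms, and is a stand-in for the authors' full model rather than a reimplementation; the wavelet, threshold, iteration count, and toy sinogram are assumptions.

      import numpy as np
      import pywt

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def recover_sinogram(observed, mask, wavelet="db4", level=3, lam=0.05, n_iter=200):
          """Fill masked-out sinogram bins by iterating wavelet soft-thresholding
          (sparsity prior) with data consistency on the observed bins."""
          x = observed.copy()
          for _ in range(n_iter):
              coeffs = pywt.wavedec2(x, wavelet, level=level)
              arr, slices = pywt.coeffs_to_array(coeffs)
              arr = soft_threshold(arr, lam)
              x = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
              x = x[: observed.shape[0], : observed.shape[1]]  # waverec2 may pad odd sizes
              x[mask] = observed[mask]                          # keep the observed bins
              x = np.maximum(x, 0.0)                            # counts are nonnegative
          return x

      # toy usage: a smooth sinogram with roughly 11% of detector columns zeroed out
      sino = np.outer(np.hanning(128), np.hanning(160)) * 100.0
      mask = np.ones_like(sino, dtype=bool)
      mask[:, ::9] = False                       # simulate missing detector elements
      recovered = recover_sinogram(sino * mask, mask)
      print(np.abs(recovered - sino)[~mask].mean())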

  5. Using compressive sensing to recover images from PET scanners with partial detector rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiollahzadeh, SeyyedMajid, E-mail: sv4@rice.edu; Clark, John W.; Mawlawi, Osama

    2015-01-15

    Purpose: Most positron emission tomography/computed tomography (PET/CT) scanners consist of tightly packed discrete detector rings to improve scanner efficiency. The authors’ aim was to use compressive sensing (CS) techniques in PET imaging to investigate the possibility of decreasing the number of detector elements per ring (introducing gaps) while maintaining image quality. Methods: A CS model based on a combination of gradient magnitude and wavelet domains (wavelet-TV) was developed to recover missing observations in PET data acquisition. The model was designed to minimize the total variation (TV) and L1-norm of wavelet coefficients while constrained by the partially observed data. The CS model also incorporated a Poisson noise term that modeled the observed noise while suppressing its contribution by penalizing the Poisson log likelihood function. Three experiments were performed to evaluate the proposed CS recovery algorithm: a simulation study, a phantom study, and six patient studies. The simulation dataset comprised six disks of various sizes in a uniform background with an activity concentration of 5:1. The simulated image was multiplied by the system matrix to obtain the corresponding sinogram and then Poisson noise was added. The resultant sinogram was masked to create the effect of partial detector removal and then the proposed CS algorithm was applied to recover the missing PET data. In addition, different levels of noise were simulated to assess the performance of the proposed algorithm. For the phantom study, an IEC phantom with six internal spheres each filled with F-18 at an activity-to-background ratio of 10:1 was used. The phantom was imaged twice on a RX PET/CT scanner: once with all detectors operational (baseline) and once with four detector blocks (11%) turned off at each of 0°, 90°, 180°, and 270° (partially sampled). The partially acquired sinograms were then recovered using the proposed algorithm. For the third test, PET images from six patient studies were investigated using the same strategy of the phantom study. The recovered images using WTV and TV as well as the partially sampled images from all three experiments were then compared with the fully sampled images (the baseline). Comparisons were done by calculating the mean error (%bias), root mean square error (RMSE), contrast recovery (CR), and SNR of activity concentration in regions of interest drawn in the background as well as the disks, spheres, and lesions. Results: For the simulation study, the mean error, RMSE, and CR for the WTV (TV) recovered images were 0.26% (0.48%), 2.6% (2.9%), 97% (96%), respectively, when compared to baseline. For the partially sampled images, these results were 22.5%, 45.9%, and 64%, respectively. For the simulation study, the average SNR for the baseline was 41.7 while for WTV (TV), recovered image was 44.2 (44.0). The phantom study showed similar trends with 5.4% (18.2%), 15.6% (18.8%), and 78% (60%), respectively, for the WTV (TV) images and 33%, 34.3%, and 69% for the partially sampled images. For the phantom study, the average SNR for the baseline was 14.7 while for WTV (TV) recovered image was 13.7 (11.9). Finally, the average of these values for the six patient studies for the WTV-recovered, TV, and partially sampled images was 1%, 7.2%, 92% and 1.3%, 15.1%, 87%, and 27%, 25.8%, 45%, respectively. Conclusions: CS with WTV is capable of recovering PET images with good quantitative accuracy from partially sampled data.
Such an approach can be used to potentially reduce the cost of scanners while maintaining good image quality.

  6. Model-based meta-analysis for comparing Vitamin D2 and D3 parent-metabolite pharmacokinetics.

    PubMed

    Ocampo-Pelland, Alanna S; Gastonguay, Marc R; Riggs, Matthew M

    2017-08-01

    Association of Vitamin D (D3 & D2) and its 25OHD metabolite (25OHD3 & 25OHD2) exposures with various diseases is an active research area. D3 and D2 dose-equivalency and each form's ability to raise 25OHD concentrations are not well-defined. The current work describes a population pharmacokinetic (PK) model for D2 and 25OHD2 and the use of a previously developed D3-25OHD3 PK model [1] for comparing D3 and D2-related exposures. Public-source D2 and 25OHD2 PK data in healthy or osteoporotic populations, including 17 studies representing 278 individuals (15 individual-level and 18 arm-level units), were selected using search criteria in PUBMED. Data included oral, single and multiple D2 doses (400-100,000 IU/d). Nonlinear mixed effects models were developed simultaneously for D2 and 25OHD2 PK (NONMEM v7.2) by considering 1- and 2-compartment models with linear or nonlinear clearance. Unit-level random effects and residual errors were weighted by arm sample size. Model simulations compared 25OHD exposures, following repeated D2 and D3 oral administration across typical dosing and baseline ranges. D2 parent and metabolite were each described by 2-compartment models with numerous parameter estimates shared with the D3-25OHD3 model [1]. Notably, parent D2 was eliminated (converted to 25OHD) through a first-order clearance whereas the previously published D3 model [1] included a saturable non-linear clearance. Similar to 25OHD3 PK model results [1], 25OHD2 was eliminated by a first-order clearance, which was almost twice as fast as the former. Simulations at lower baselines, following lower equivalent doses, indicated that D3 was more effective than D2 at raising 25OHD concentrations. Due to saturation of D3 clearance, however, at higher doses or baselines, the probability of D2 surpassing D3's ability to raise 25OHD concentrations increased substantially. Since 25OHD concentrations generally surpassed 75 nmol/L at these higher baselines by 3 months, there would be no expected clinical difference in the two forms.
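    The published model is a two-compartment parent-metabolite system fitted in NONMEM; the sketch below reduces it to a one-compartment parent feeding a one-compartment metabolite, just to show how such a parent-metabolite structure can be simulated. All rate constants, volumes, and the dosing scheme are placeholder assumptions, not the estimates from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Simplified parent (D2) -> metabolite (25OHD2) pharmacokinetics:
      #   dA_d/dt = -ka * A_d                            (absorption from the gut depot)
      #   dA_p/dt =  ka * A_d - (CL_conv / V_p) * A_p    (parent converted to metabolite)
      #   dA_m/dt =  (CL_conv / V_p) * A_p - (CL_m / V_m) * A_m
      ka, CL_conv, V_p, CL_m, V_m = 0.5, 2.0, 100.0, 1.0, 50.0   # placeholder values

      def rhs(t, y):
          A_d, A_p, A_m = y
          return [-ka * A_d,
                  ka * A_d - (CL_conv / V_p) * A_p,
                  (CL_conv / V_p) * A_p - (CL_m / V_m) * A_m]

      dose = 1000.0                                     # single oral dose (arbitrary units)
      sol = solve_ivp(rhs, (0.0, 240.0), [dose, 0.0, 0.0], dense_output=True)
      t = np.linspace(0.0, 240.0, 25)
      conc_metabolite = sol.sol(t)[2] / V_m             # 25OHD2 concentration over time
      print(conc_metabolite.round(2))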

  7. Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.

    PubMed

    Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D

    2018-04-12

    Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence, and thereby reduce the triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated, less than 10% total body surface area burns, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles, and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13 overtriaged) following implementation of the web tool (p = 0.017 for combined errors). Overtriage dropped further from baseline to 10/150 after the second test of change (p = 0.005). The total number of triaged patients dropped from 92.3/week to 75.5/week after the second test of change. There was no statistically significant change in the undertriage rate. The combination of web tool implementation and protocol refinement decreased the combined triage error rate by over 50% (from 16.7%-7.9%). We developed and tested a web tool that improved triage accuracy, and provided a sustainable method to enact future quality improvement. This web tool and QI framework would be easily expandable to other hospitals. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
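    The actual web tool encodes the institution's activation protocol; purely as an illustration of automating a level assignment from EMS-provided fields, a minimal sketch follows. The field names and cutoff criteria are invented for the example and are not the Vanderbilt protocol.

      # Hypothetical, simplified trauma activation level assignment.
      # Field names and criteria are illustrative only, not an institutional protocol.
      def assign_trauma_level(ems):
          if ems.get("systolic_bp", 120) < 90 or ems.get("intubated", False):
              return 1                       # highest activation for physiologic compromise
          if ems.get("penetrating_torso_injury", False):
              return 1
          if ems.get("gcs", 15) < 13 or ems.get("long_bone_fractures", 0) >= 2:
              return 2
          return 3                           # routine trauma evaluation

      print(assign_trauma_level({"systolic_bp": 84, "gcs": 14}))   # -> 1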

  8. Action Research of an Error Self-Correction Intervention: Examining the Effects on the Spelling Accuracy Behaviors of Fifth-Grade Students Identified as At-Risk

    ERIC Educational Resources Information Center

    Turner, Jill; Rafferty, Lisa A.; Sullivan, Ray; Blake, Amy

    2017-01-01

    In this action research case study, the researchers used a multiple baseline across two student pairs design to investigate the effects of the error self-correction method on the spelling accuracy behaviors for four fifth-grade students who were identified as being at risk for learning disabilities. The dependent variable was the participants'…

  9. Effects of Topical Latanoprost on Intraocular Pressure and Myopia Progression in Young Guinea Pigs

    PubMed Central

    El-Nimri, Nevin W.; Wildsoet, Christine F.

    2018-01-01

    Purpose To determine whether latanoprost, a prostaglandin analog proven to be very effective in reducing intraocular pressure (IOP) in humans, can also slow myopia progression in the guinea pig form deprivation (FD) model. Methods Two-week-old pigmented guinea pigs underwent monocular FD and daily topical latanoprost (0.005%, n = 10) or artificial tears (control, n = 10) starting 1 week after the initiation of FD, with all treatments continuing for a further 9 weeks. Tonometry, retinoscopy, and high-frequency A-scan ultrasonography were used to monitor IOP, refractive error, and ocular axial dimensions, respectively. Results Latanoprost significantly reduced IOP and slowed myopia progression. Mean interocular IOP differences (±SEM) recorded at baseline and week 10 were −0.30 ± 0.51 and 1.80 ± 1.16 mm Hg (P = 0.525) for the control group and 0.07 ± 0.35 and −5.17 ± 0.96 mm Hg (P < 0.001) for the latanoprost group. Equivalent interocular differences for optical axial length at baseline and week 10 were 0.00 ± 0.015 and 0.29 ± 0.04 mm (P < 0.001; control) and 0.02 ± 0.02 and 0.06 ± 0.02 mm (P = 0.202; latanoprost), and for refractive error were +0.025 ± 0.36 and −8.2 ± 0.71 diopter (D) (P < 0.001; control), and −0.15 ± 0.35 and −2.25 ± 0.54 D (P = 0.03; latanoprost). Conclusions In the FD guinea pig model, latanoprost significantly reduces the development of myopia. Although further investigations into underlying mechanisms are needed, the results open the exciting possibility of a new line of myopia control therapy. PMID:29847673

  10. Technology research for strapdown inertial experiment and digital flight control and guidance

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Cottrell, D. E.

    1985-01-01

    A helicopter flight-test program to evaluate the performance of Honeywell's Tetrad - a strapdown, laser gyro, inertial navigation system - is discussed. The results of 34 flights showed a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n.mi., with a standard deviation of 1.48 n.mi.; and a modeled mean-position-error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. Tetrad's four ring-laser gyros provided reliable and accurate angular rate sensing during the test program, and no sensor failures were detected during the evaluation. Criteria suitable for investigating cockpit systems in rotorcraft were developed. These criteria led to the development of two basic simulators. The first was a standard simulator which could be used to obtain baseline information for studying pilot workload and interactions. The second was an advanced simulator which integrated the RODAAS developed by Honeywell. The second research area also included surveying the aerospace industry to determine the level of use and impact of microcomputers and related components on avionics systems.

  11. Predicting functional decline and survival in amyotrophic lateral sclerosis.

    PubMed

    Ong, Mei-Lyn; Tan, Pei Fang; Holbrook, Joanna D

    2017-01-01

    Better predictors of amyotrophic lateral sclerosis disease course could enable smaller and more targeted clinical trials. Partially to address this aim, the Prize for Life foundation collected de-identified records from amyotrophic lateral sclerosis sufferers who participated in clinical trials of investigational drugs and made them available to researchers in the PRO-ACT database. In this study, time series data from PRO-ACT subjects were fitted to exponential models. Binary classes for decline in the total score of amyotrophic lateral sclerosis functional rating scale revised (ALSFRS-R) (fast/slow progression) and survival (high/low death risk) were derived. Data was segregated into training and test sets via cross validation. Learning algorithms were applied to the demographic, clinical and laboratory parameters in the training set to predict ALSFRS-R decline and the derived fast/slow progression and high/low death risk categories. The performance of predictive models was assessed by cross-validation in the test set using Receiver Operator Curves and root mean squared errors. A model created using a boosting algorithm containing the decline in four parameters (weight, alkaline phosphatase, albumin and creatine kinase) post baseline, was able to predict functional decline class (fast or slow) with fair accuracy (AUC = 0.82). However similar approaches to build a predictive model for decline class by baseline subject characteristics were not successful. In contrast, baseline values of total bilirubin, gamma glutamyltransferase, urine specific gravity and ALSFRS-R item score-climbing stairs were sufficient to predict survival class. Using combinations of small numbers of variables it was possible to predict classes of functional decline and survival across the 1-2 year timeframe available in PRO-ACT. These findings may have utility for design of future ALS clinical trials.
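    As an illustration of the two modeling steps described (fitting an exponential decline to each subject's ALSFRS-R series, then learning a fast/slow classifier from covariates), a minimal sketch is given below. The synthetic data, covariate meanings, and the decline-rate cutoff are assumptions; the original work used the PRO-ACT database with cross-validated boosted models.

      import numpy as np
      from scipy.optimize import curve_fit
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)

      def exp_decline(t, a, k):
          return a * np.exp(-k * t)

      # Step 1: fit an exponential model to each subject's ALSFRS-R time series
      # and derive a binary fast/slow progression label from the decay rate.
      n_subjects, t = 200, np.linspace(0.0, 12.0, 7)           # months of follow-up
      true_k = rng.uniform(0.005, 0.08, n_subjects)
      rates = []
      for k in true_k:
          scores = exp_decline(t, 40.0, k) + rng.normal(0.0, 1.0, t.size)
          (a_hat, k_hat), _ = curve_fit(exp_decline, t, scores, p0=(40.0, 0.02))
          rates.append(k_hat)
      labels = (np.array(rates) > 0.03).astype(int)            # 1 = fast progression (assumed cutoff)

      # Step 2: predict the class from subject-level covariates (synthetic stand-ins here).
      X = np.column_stack([true_k * 10 + rng.normal(0, 0.1, n_subjects),   # informative covariate
                           rng.normal(0, 1, n_subjects)])                  # uninformative covariate
      clf = GradientBoostingClassifier().fit(X[:150], labels[:150])
      print("held-out accuracy:", clf.score(X[150:], labels[150:]))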

  12. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  13. The effect of tracking network configuration on GPS baseline estimates for the CASA Uno experiment

    NASA Technical Reports Server (NTRS)

    Wolf, S. Kornreich; Dixon, T. H.; Freymueller, J. T.

    1990-01-01

    The effect of the tracking network on long (greater than 100 km) GPS baseline estimates was evaluated using various subsets of the global tracking network initiated by the first Central and South America (CASA Uno) experiment. It was found that the best results could be obtained with a global tracking network consisting of three U.S. stations, two sites in the southwestern Pacific, and two sites in Europe. In comparison with smaller subsets, this global network improved the baseline repeatability, the resolution of carrier phase cycle ambiguities, and the formal errors of the orbit estimates.

  14. Flexible, multi-measurement guided wave damage detection under varying temperatures

    NASA Astrophysics Data System (ADS)

    Douglass, Alexander C. S.; Harley, Joel B.

    2018-04-01

    Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are one of the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data are then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage metric, or damage indicator, over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects from damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
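    To make the stretch-based idea concrete, the sketch below time-stretches a measurement over a small grid of stretch factors, keeps the best match to each baseline, and uses the smallest residual across many baselines as a damage indicator, in the spirit of the multi-measurement approach described above. The signals, stretch grid, and residual definition are simplified assumptions rather than the authors' metric.

      import numpy as np

      def stretch(signal, factor, t):
          """Resample a signal as if its sample times were scaled by `factor` (linear interp)."""
          return np.interp(t, t * factor, signal)

      def best_match_residual(measurement, baseline, t, factors):
          """Smallest normalized residual over a grid of candidate stretch factors."""
          residuals = [np.linalg.norm(stretch(measurement, f, t) - baseline) /
                       np.linalg.norm(baseline) for f in factors]
          return min(residuals)

      def damage_indicator(measurement, baselines, t, factors=np.linspace(0.98, 1.02, 41)):
          # Multi-measurement idea: compare against every available baseline and keep the
          # best (smallest) residual, so imperfect compensation against any single
          # baseline does not masquerade as damage.
          return min(best_match_residual(measurement, b, t, factors) for b in baselines)

      t = np.linspace(0.0, 1.0, 1000)
      baselines = [np.sin(2 * np.pi * 50 * t * (1 + d)) for d in (-0.01, 0.0, 0.01)]  # temperature-shifted
      healthy = np.sin(2 * np.pi * 50 * t * 1.005)
      damaged = healthy + 0.3 * np.sin(2 * np.pi * 120 * t)      # extra scattered wave (toy "damage")
      print(damage_indicator(healthy, baselines, t), damage_indicator(damaged, baselines, t))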

  15. Comparison of Interferometric Time-Series Analysis Techniques with Implications for Future Mission Design

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.

    2006-12-01

    Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 the critical baseline have substantial geometrical decorrelation for distributed targets. Short baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit the geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping error due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as increasing the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase/pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-Band) substantially improve the ability to unwrap the phase correctly by directly reducing both interferometric phase amplitude and temporal decorrelation.

  16. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.

  17. A Continental Rifting Event in Tanzania Revealed by Envisat and ALOS InSAR Observations

    NASA Astrophysics Data System (ADS)

    Oyen, A. M.; Marinkovic, P. S.; Wauthier, C.; d'Oreye, N.; Hanssen, R. F.

    2008-11-01

    From July to September 2007 a series of moderate earthquakes struck the area south of the Gelai volcano, located on the Eastern branch of the East African Rift (North Tanzania). Most deformation patterns detected by InSAR in this period are very complex, impeding proper interpretation. To decrease the complexity of the models of the deformation, this study proposes two strategies for combining data from different tracks and sensors. In a first stage, a method is proposed to correct unwrapping errors in C-band using the much more coherent L-band data. Furthermore, a modeling optimization method is explored, which aims at the decomposition of the deformation into smaller temporal baselines, by means of creating new, artificial interferograms and the use of models. Due to the higher coherence level and fewer phase cycles in L-band, the deformation interpretation is facilitated, but model residual interpretation has become more difficult compared to C-band.

  18. Automated Detection of Diabetic Retinopathy using Deep Learning.

    PubMed

    Lam, Carson; Yi, Darvin; Guo, Margaret; Lindsey, Tony

    2018-01-01

    Diabetic retinopathy is a leading cause of blindness among working-age adults. Early detection of this condition is critical for good prognosis. In this paper, we demonstrate the use of convolutional neural networks (CNNs) on color fundus images for the recognition task of diabetic retinopathy staging. Our network models achieved test metric performance comparable to baseline literature results, with a validation sensitivity of 95%. We additionally explored multinomial classification models, and demonstrate that errors primarily occur in the misclassification of mild disease as normal due to the CNNs' inability to detect subtle disease features. We discovered that preprocessing with contrast limited adaptive histogram equalization and ensuring dataset fidelity by expert verification of class labels improves recognition of subtle features. Transfer learning on pretrained GoogLeNet and AlexNet models from ImageNet improved peak test set accuracies to 74.5%, 68.8%, and 57.2% on 2-ary, 3-ary, and 4-ary classification models, respectively.
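    The two ingredients highlighted above (CLAHE preprocessing and ImageNet transfer learning) can be sketched as follows. The snippet uses OpenCV for CLAHE and a torchvision GoogLeNet with its final layer replaced for 4-class grading; it assumes a recent torchvision weights API, and image loading, class counts, and the training loop are assumptions or omitted, so it is a sketch rather than the authors' pipeline.

      import cv2
      import torch
      import torch.nn as nn
      from torchvision import models

      def clahe_fundus(bgr_image):
          """Contrast-limited adaptive histogram equalization on the luminance channel."""
          lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
          l, a, b = cv2.split(lab)
          clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
          return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

      # Transfer learning: start from ImageNet weights and replace the classifier head
      # with a 4-way output for the 4-ary diabetic retinopathy grading task.
      model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
      model.fc = nn.Linear(model.fc.in_features, 4)
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      criterion = nn.CrossEntropyLoss()
      # (data loading and the training loop are omitted; each fundus image would be
      #  preprocessed with clahe_fundus before conversion to a tensor)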

  19. Discovering body site and severity modifiers in clinical texts

    PubMed Central

    Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K

    2014-01-01

    Objective To research computational methods for discovering body site and severity modifiers in clinical texts. Methods We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. Results The performance of our method for discovering body site modifiers achieves F1 of 0.740–0.908 and our method for discovering severity modifiers achieves F1 of 0.905–0.929. Discussion Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and inability of the system to discern deeper semantic structures. Conclusions We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES). PMID:24091648
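    As a rough illustration of casting modifier discovery as supervised relation extraction, the sketch below represents each candidate (entity, modifier) pair with a few token-level features and trains a linear SVM. The feature set and toy examples are invented stand-ins for the rich linguistic features and annotated corpora used in the paper.

      from sklearn.feature_extraction import DictVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      def pair_features(tokens, entity_idx, modifier_idx):
          """Token and context features for a candidate (entity, modifier) pair."""
          lo, hi = sorted((entity_idx, modifier_idx))
          return {
              "entity": tokens[entity_idx].lower(),
              "modifier": tokens[modifier_idx].lower(),
              "distance": abs(entity_idx - modifier_idx),
              "between": " ".join(tokens[lo + 1:hi]).lower(),
          }

      # toy training pairs: does the modifier attach to the entity? (1 = related)
      examples = [
          (["severe", "pain", "in", "the", "left", "knee"], 1, 0, 1),   # "severe" modifies "pain"
          (["severe", "pain", "in", "the", "left", "knee"], 5, 0, 0),   # "severe" does not modify "knee"
          (["mild", "swelling", "of", "the", "ankle"], 1, 0, 1),
          (["mild", "swelling", "of", "the", "ankle"], 4, 0, 0),
      ]
      X = [pair_features(toks, e, m) for toks, e, m, _ in examples]
      y = [label for *_, label in examples]

      clf = make_pipeline(DictVectorizer(), LinearSVC())
      clf.fit(X, y)
      print(clf.predict([pair_features(["severe", "headache"], 1, 0)]))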

  20. Discovering body site and severity modifiers in clinical texts.

    PubMed

    Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K

    2014-01-01

    To research computational methods for discovering body site and severity modifiers in clinical texts. We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. The performance of our method for discovering body site modifiers achieves F1 of 0.740-0.908 and our method for discovering severity modifiers achieves F1 of 0.905-0.929. Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and inability of the system to discern deeper semantic structures. We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES).

  1. Multi-temporal InSAR analysis to reduce uncertainties and assess time-dependence of deformation in the northern Chilean forearc

    NASA Astrophysics Data System (ADS)

    Manjunath, D.; Gomez, F.; Loveless, J.

    2005-12-01

    Interferometric Synthetic Aperture Radar (InSAR) provides unprecedented spatial imaging of crustal deformation. However, for small deformations, such as those due to interseismic strain accumulation, potentially significant uncertainty may result from other sources of interferometric phase, such as atmospheric effects, errors in satellite baseline, and height errors in the reference digital elevation model (DEM). We aim to constrain spatial and temporal variations in crustal deformation of the northern Chilean forearc region of the Andean subduction zone (19° - 22°S) using multiple interferograms spanning 1995 - 2000. The study area includes the region of the 1995 Mw 8.1 Antofagasta earthquake and the region to the north. In contrast to previous InSAR-based studies of the Chilean forearc, we seek to distinguish interferometric phase contributions from linear and nonlinear deformation, height errors in the DEM, and atmospheric effects. Understanding these phase contributions reduces the uncertainties on the deformation rates and provides a view of the time-dependence of deformation. The interferograms cover a 150 km-wide swath spanning two adjacent orbital tracks. Our study involves the analysis of more than 28 interferograms along each track. Coherent interferograms in the hyper-arid Atacama Desert permit spatial phase unwrapping. Initial estimates of topographic phase were determined using 3'' DEM data from the SRTM mission. We perform a pixel-by-pixel analysis of the unwrapped phase to identify time- and baseline-dependent phase contributions, using the Gamma Remote Sensing radar software. Atmospheric phase, non-linear deformation, and phase noise were further distinguished using a combination of spatial and temporal filters. Non-linear deformation is evident for up to 2.5 years following the 1995 earthquake, followed by a return to time-linear, interseismic strain accumulation. The regional trend of linear deformation, characterized by coastal subsidence and relative uplift inland, is consistent with the displacement field expected for a locked subduction zone. Our improved determination of deformation rates is used to formulate a new elastic model of interseismic strain in the Chilean forearc.
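    The separation of linear deformation from DEM height error exploits their different dependencies: the deformation phase grows with the time interval, while the topographic residual scales with the perpendicular baseline. A minimal per-pixel least-squares sketch of that idea is given below; the wavelength, geometry values, and synthetic phases are placeholders, and the atmospheric and nonlinear terms handled by the authors' filtering are omitted.

      import numpy as np

      # Per-pixel model for unwrapped interferometric phase (one row per interferogram):
      #   phi_i = (4*pi/lam)*v*dt_i + (4*pi/lam)*(bperp_i/(R*sin(theta)))*dh + noise
      # Solving the linear system separates the deformation rate v from the DEM error dh.
      lam, R, theta = 0.056, 850e3, np.deg2rad(23.0)         # C-band wavelength and geometry (assumed)

      dt = np.array([0.1, 0.3, 0.5, 0.9, 1.4, 2.0])           # temporal baselines [years]
      bperp = np.array([40.0, -120.0, 210.0, -60.0, 150.0, -300.0])   # perpendicular baselines [m]

      v_true, dh_true = -0.012, 8.0                           # 12 mm/yr subsidence, 8 m DEM error
      A = (4 * np.pi / lam) * np.column_stack([dt, bperp / (R * np.sin(theta))])
      phase = A @ np.array([v_true, dh_true]) + np.random.default_rng(1).normal(0, 0.3, dt.size)

      v_hat, dh_hat = np.linalg.lstsq(A, phase, rcond=None)[0]
      print(f"rate = {v_hat*1000:.1f} mm/yr, DEM error = {dh_hat:.1f} m")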

  2. E-learning optimization: the relative and combined effects of mental practice and modeling on enhanced podcast-based learning-a randomized controlled trial.

    PubMed

    Alam, Fahad; Boet, Sylvain; Piquette, Dominique; Lai, Anita; Perkes, Christopher P; LeBlanc, Vicki R

    2016-10-01

    Enhanced podcasts increase learning, but evidence is lacking on how they should be designed to optimize their effectiveness. This study assessed the impact two learning instructional design methods (mental practice and modeling), either on their own or in combination, for teaching complex cognitive medical content when incorporated into enhanced podcasts. Sixty-three medical students were randomised to one of four versions of an airway management enhanced podcast: (1) control: narrated presentation; (2) modeling: narration with video demonstration of skills; (3) mental practice: narrated presentation with guided mental practice; (4) combined: modeling and mental practice. One week later, students managed a manikin-based simulated airway crisis. Knowledge acquisition was assessed by baseline and retention multiple-choice quizzes. Two blinded raters assessed all videos obtained from simulated crises to measure the students' skills using a key-elements scale, critical error checklist, and the Ottawa global rating scale (GRS). Baseline knowledge was not different between all four groups (p = 0.65). One week later, knowledge retention was significantly higher for (1) both the mental practice and modeling group than the control group (p = 0.01; p = 0.01, respectively) and (2) the combined mental practice and modeling group compared to all other groups (all ps = 0.01). Regarding skills acquisition, the control group significantly under-performed in comparison to all other groups on the key-events scale (all ps ≤ 0.05), the critical error checklist (all ps ≤ 0.05), and the Ottawa GRS (all ps ≤ 0.05). The combination of mental practice and modeling led to greater improvement on the key events checklist (p = 0.01) compared to either strategy alone. However, the combination of the two strategies did not result in any further learning gains on the two other measures of clinical performance (all ps > 0.05). The effectiveness of enhanced podcasts for knowledge retention and clinical skill acquisition is increased with either mental practice or modeling. The combination of mental practice and modeling had synergistic effects on knowledge retention, but conveyed less clear advantages in its application through clinical skills.

  3. Learning to REDUCE: A Reduced Electricity Consumption Prediction Ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aman, Saima; Chelmis, Charalampos; Prasanna, Viktor

    Utilities use Demand Response (DR) to balance supply and demand in the electric grid by involving customers in efforts to reduce electricity consumption during peak periods. To implement and adapt DR under dynamically changing conditions of the grid, reliable prediction of reduced consumption is critical. However, despite the wealth of research on electricity consumption prediction and DR being long in practice, the problem of reduced consumption prediction remains largely unaddressed. In this paper, we identify unique computational challenges associated with the prediction of reduced consumption and contrast this to that of normal consumption and DR baseline prediction. We propose a novel ensemble model that leverages different sequences of daily electricity consumption on DR event days as well as contextual attributes for reduced consumption prediction. We demonstrate the success of our model on a large, real-world, high resolution dataset from a university microgrid comprising over 950 DR events across a diverse set of 32 buildings. Our model achieves an average error of 13.5%, an 8.8% improvement over the baseline. Our work is particularly relevant for buildings where electricity consumption is not tied to strict schedules. Our results and insights should prove useful to the researchers and practitioners working in the sustainable energy domain.

  4. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (v) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.

  5. MRMAide: a mixed resolution modeling aide

    NASA Astrophysics Data System (ADS)

    Treshansky, Allyn; McGraw, Robert M.

    2002-07-01

    The Mixed Resolution Modeling Aide (MRMAide) technology is an effort to semi-automate the implementation of Mixed Resolution Modeling (MRM). MRMAide suggests ways of resolving differences in fidelity and resolution across diverse modeling paradigms. The goal of MRMAide is to provide a technology that will allow developers to incorporate model components into scenarios other than those for which they were designed. Currently, MRM is implemented by hand. This is a tedious, error-prone, and non-portable process. MRMAide, in contrast, will automatically suggest to a developer where and how to connect different components and/or simulations. MRMAide has three phases of operation: pre-processing, data abstraction, and validation. During pre-processing the components to be linked together are evaluated in order to identify appropriate mapping points. During data abstraction those mapping points are linked via data abstraction algorithms. During validation developers receive feedback regarding their newly created models relative to existing baselined models. The current work presents an overview of the various problems encountered during MRM and the various technologies utilized by MRMAide to overcome those problems.

  6. It's Only a Phase: Applying the 5 Phases of Clinical Trials to the NSCR Model Improvement Process

    NASA Technical Reports Server (NTRS)

    Elgart, S. R.; Milder, C. M.; Chappell, L. J.; Semones, E. J.

    2017-01-01

    NASA limits astronaut radiation exposures to a 3% risk of exposure-induced death from cancer (REID) at the upper 95% confidence level. Since astronauts approach this limit, it is important that the estimate of REID be as accurate as possible. The NASA Space Cancer Risk 2012 (NSCR-2012) model has been the standard for NASA's space radiation protection guidelines since its publication in 2013. The model incorporates elements from U.S. baseline statistics, Japanese atomic bomb survivor research, animal models, cellular studies, and radiation transport to calculate astronaut baseline risk of cancer and REID. The NSCR model is under constant revision to ensure emerging research is incorporated into radiation protection standards. It is important to develop guidelines, however, to determine what new research is appropriate for integration. Certain standards of transparency are necessary in order to assess data quality, statistical quality, and analytical quality. To this effect, all original source code and any raw data used to develop the code are required to confirm there are no errors which significantly change reported outcomes. It is possible to apply a clinical trials approach to select and assess the improvement concepts that will be incorporated into future iterations of NSCR. This poster describes the five phases of clinical trials research, pre-clinical research, and clinical research phases I-IV, explaining how each step can be translated into an appropriate NSCR model selection guideline.

  7. Effects of repeated walking in a perturbing environment: a 4-day locomotor learning study.

    PubMed

    Blanchette, Andreanne; Moffet, Helene; Roy, Jean-Sébastien; Bouyer, Laurent J

    2012-07-01

    Previous studies have shown that when subjects repeatedly walk in a perturbing environment, initial movement error becomes smaller, suggesting that retention of the adapted locomotor program occurred (learning). It has been proposed that the newly learned locomotor program may be stored separately from the baseline program. However, how locomotor performance evolves with repeated sessions of walking with the perturbation is not yet known. To address this question, 10 healthy subjects walked on a treadmill on 4 consecutive days. Each day, locomotor performance was measured using kinematics and surface electromyography (EMGs), before, during, and after exposure to a perturbation, produced by an elastic tubing that pulled the foot forward and up during swing, inducing a foot velocity error in the first strides. Initial movement error decreased significantly between days 1 and 2 and then remained stable. Associated changes in medial hamstring EMG activity stabilized only on day 3, however. Aftereffects were present after perturbation removal, suggesting that daily adaptation involved central command recalibration of the baseline program. Aftereffects gradually decreased across days but were still visible on day 4. Separation between the newly learned and baseline programs may take longer than suggested by the daily improvement in initial performance in the perturbing environment or may never be complete. These results therefore suggest that reaching optimal performance in a perturbing environment should not be used as the main indicator of a completed learning process, as central reorganization of the motor commands continues days after initial performance has stabilized.

  8. A data driven partial ambiguity resolution: Two step success rate criterion, and its simulation demonstration

    NASA Astrophysics Data System (ADS)

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-12-01

    Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In case of weak models (i.e., low precision of data), however, the success rate of AR may be low, which may consequently introduce large errors to the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) is therefore proposed such that the baseline precision can be improved by fixing only a subset of ambiguities with high success rate. This contribution proposes a new PAR strategy in which the subset is selected so that the expected precision gain is maximized among a set of pre-selected subsets while the failure rate is controlled. These pre-selected subsets are chosen to have the highest success rate among those of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC) because it first tries to fix a relatively large subset and uses the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. A simulation study shows how the method can be used in practice without introducing a large additional computational effort and, more importantly, how it can improve (or at least not deteriorate) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
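    Subsets with a high success rate for a given size are commonly screened using the bootstrapped success rate computed from the conditional standard deviations of the decorrelated ambiguities, P = prod_i (2*Phi(1/(2*sigma_i)) - 1). The sketch below applies that standard formula to pick the largest subset meeting a success-rate threshold; the conditional standard deviations are synthetic, the decorrelation step is assumed to have been done already, and the sorting heuristic stands in for the paper's subset pre-selection.

      import numpy as np
      from scipy.stats import norm

      def bootstrapped_success_rate(cond_std):
          """Success rate of integer bootstrapping given conditional std devs (cycles)."""
          return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * np.asarray(cond_std))) - 1.0))

      def select_subset(cond_std, min_success_rate=0.999):
          """Largest subset of the best-conditioned ambiguities meeting the target rate.
          (In practice the conditional std devs come from the LDL factorization in a
          fixed order; sorting them is a simplification for this sketch.)"""
          cond_std = np.asarray(cond_std)
          order = np.argsort(cond_std)                  # most precise ambiguities first
          for k in range(len(cond_std), 0, -1):
              subset = order[:k]
              if bootstrapped_success_rate(cond_std[subset]) >= min_success_rate:
                  return subset
          return np.array([], dtype=int)                # fall back to the float solution

      # synthetic conditional standard deviations after decorrelation (cycles)
      cond_std = [0.04, 0.05, 0.07, 0.12, 0.20, 0.35]
      subset = select_subset(cond_std)
      print(subset, bootstrapped_success_rate(np.asarray(cond_std)[subset]))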

  9. SU-E-T-144: Effective Analysis of VMAT QA Generated Trajectory Log Files for Medical Accelerator Predictive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axes positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: TG-142 and published analysis of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2mm for collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. Simulated Gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. Gantry position error of 0.2 deg was detected by the CC maximum value charts. The MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
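    Individual and Moving Range (I/MR) charting uses standard control-chart constants; the short sketch below computes 3σ-equivalent limits from a series of logged parameter values and flags out-of-limit points. The sample data are synthetic, and the hybrid specification-based limits described in the abstract are not reproduced.

      import numpy as np

      def imr_limits(x):
          """Individuals and Moving Range chart limits (standard constants for n = 2)."""
          x = np.asarray(x, dtype=float)
          mr = np.abs(np.diff(x))                  # moving ranges between consecutive points
          mr_bar, x_bar = mr.mean(), x.mean()
          return {
              "I":  (x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar),   # 3*sigma via MRbar/d2
              "MR": (0.0, 3.267 * mr_bar),                             # D4 = 3.267 for n = 2
          }

      # synthetic per-delivery log values (e.g., a jaw position at a control point, in cm)
      values = np.array([5.00, 5.01, 4.99, 5.00, 5.02, 4.98, 5.01, 5.20])   # last point drifted
      low, high = imr_limits(values[:-1])["I"]     # limits from the in-control history
      print("out of limits:", [v for v in values if not (low <= v <= high)])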

  10. Prospects for UT1 Measurements from VLBI Intensive Sessions

    NASA Technical Reports Server (NTRS)

    Boehm, Johannes; Nilsson, Tobias; Schuh, Harald

    2010-01-01

    Very Long Baseline Interferometry (VLBI) Intensives are one-hour single-baseline sessions that provide Universal Time (UT1) in near real time, with a delay of up to three days if a site does not e-transfer its observational data. Because of the importance of UT1 estimates for the prediction of Earth orientation parameters, as well as for any kind of navigation on Earth or in space, there is a need to improve not only the timeliness of the results but also their accuracy. We identify the asymmetry of the tropospheric delays as the major error source, and we provide two strategies to improve the results, in particular for those Intensives which include the station Tsukuba in Japan with its large tropospheric variation. We find an improvement when (1) using ray-traced delays from a numerical weather model, and (2) estimating tropospheric gradients within the analysis of Intensive sessions. The improvement is shown in terms of the reduction of the rms of length-of-day estimates w.r.t. those derived from Global Positioning System observations.

  11. Small-scale loess landslide monitoring with small baseline subsets interferometric synthetic aperture radar technique-case study of Xingyuan landslide, Shaanxi, China

    NASA Astrophysics Data System (ADS)

    Zhao, Chaoying; Zhang, Qin; He, Yang; Peng, Jianbing; Yang, Chengsheng; Kang, Ya

    2016-04-01

    The small baseline subsets interferometric synthetic aperture radar technique is applied to detect and monitor a loess landslide on the southern bank of the Jinghe River, Shaanxi province, China. To obtain accurate pre-slide time-series deformation results for a landslide of small spatial scale and abrupt temporal deformation, the digital elevation model error, the coherence threshold for phase unwrapping, and the quality of the unwrapped interferograms must be carefully checked in advance. In this case, land subsidence accompanying the landslide within a distance of less than 1 km is detected, which provides a sound precursor for small-scale loess landslide detection. Moreover, prolonged and continuous land subsidence has been monitored, and the starting point of the landslide deformation is successfully inverted, which is key to monitoring similar loess landslides. In addition, the accelerated landslide deformation from one to two months before the landslide can provide a critical clue for early warning of this kind of landslide.

  12. Baseline Design and Performance Analysis of Laser Altimeter for Korean Lunar Orbiter

    NASA Astrophysics Data System (ADS)

    Lim, Hyung-Chul; Neumann, Gregory A.; Choi, Myeong-Hwan; Yu, Sung-Yeol; Bang, Seong-Cheol; Ka, Neung-Hyun; Park, Jong-Uk; Choi, Man-Soo; Park, Eunseo

    2016-09-01

    Korea’s lunar exploration project includes the launching of an orbiter, a lander (including a rover), and an experimental orbiter (referred to as a lunar pathfinder). Laser altimeters have played an important scientific role in lunar, planetary, and asteroid exploration missions since their first use in 1971 onboard the Apollo 15 mission to the Moon. In this study, a laser altimeter was proposed as a scientific instrument for the Korean lunar orbiter, which will be launched by 2020, to study the global topography of the surface of the Moon and its gravitational field and to support other payloads such as a terrain mapping camera or spectral imager. This study presents the baseline design and performance model for the proposed laser altimeter. Additionally, the study discusses the expected performance based on numerical simulation results. The simulation results indicate that the design of system parameters satisfies performance requirements with respect to detection probability and range error even under unfavorable conditions.

  13. A generalized least squares regression approach for computing effect sizes in single-case research: application examples.

    PubMed

    Maggin, Daniel M; Swaminathan, Hariharan; Rogers, Helen J; O'Keeffe, Breda V; Sugai, George; Horner, Robert H

    2011-06-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
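
    A minimal sketch of the kind of GLS computation involved is given below; for illustration it assumes a known AR(1) autocorrelation and a simple baseline/treatment phase dummy (the published method also models trends and estimates the autocorrelation from the data, and the example series is hypothetical):

```python
import numpy as np

def gls_ar1_effect(y, phase, rho):
    """GLS estimate of a phase (baseline vs treatment) effect for a short
    time series with AR(1) errors of assumed-known autocorrelation rho.
    Returns the effect expressed in (residual) standard-deviation units."""
    y = np.asarray(y, float)
    n = len(y)
    X = np.column_stack([np.ones(n), np.asarray(phase, float)])  # intercept + phase dummy
    idx = np.arange(n)
    V = rho ** np.abs(idx[:, None] - idx[None, :])                # AR(1) correlation matrix
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)        # GLS coefficients
    resid = y - X @ beta
    sigma2 = (resid @ Vinv @ resid) / (n - X.shape[1])            # residual variance
    return beta[1] / np.sqrt(sigma2)

# Example: 6 baseline and 6 treatment observations, rho assumed known
y = [3, 4, 3, 5, 4, 4, 7, 8, 7, 9, 8, 8]
phase = [0] * 6 + [1] * 6
print(gls_ar1_effect(y, phase, rho=0.3))
```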

  14. Project FIT: A School, Community and Social Marketing Intervention Improves Healthy Eating Among Low-Income Elementary School Children.

    PubMed

    Alaimo, Katherine; Carlson, Joseph J; Pfeiffer, Karin A; Eisenmann, Joey C; Paek, Hye-Jin; Betz, Heather H; Thompson, Tracy; Wen, Yalu; Norman, Gregory J

    2015-08-01

    Project FIT was a two-year multi-component nutrition and physical activity intervention delivered in ethnically diverse low-income elementary schools in Grand Rapids, MI. This paper reports effects on children's nutrition outcomes and process evaluation of the school component. A quasi-experimental design was utilized. 3rd, 4th and 5th-grade students (Yr 1 baseline: N = 410; Yr 2 baseline: N = 405; age range: 7.5-12.6 years) were measured in the fall and spring over the two-year intervention. Ordinal logistic and mixed-effects models and generalized estimating equations were fitted, and robust standard errors were utilized. Primary outcomes favoring the intervention students were found regarding consumption of fruits, vegetables and whole grain bread during year 2. Process evaluation revealed that implementation of most intervention components increased during year 2. Project FIT resulted in small but beneficial effects on consumption of fruits, vegetables, and whole grain bread in ethnically diverse low-income elementary school children.

  15. Cognitive imitation in 2-year-old children (Homo sapiens): a comparison with rhesus monkeys (Macaca mulatta).

    PubMed

    Subiaul, Francys; Romansky, Kathryn; Cantlon, Jessica F; Klein, Tovah; Terrace, Herbert

    2007-10-01

    Here we compare the performance of 2-year-old human children with that of adult rhesus macaques on a cognitive imitation task. The task was to respond, in a particular order, to arbitrary sets of photographs that were presented simultaneously on a touch sensitive video monitor. Because the spatial position of list items was varied from trial to trial, subjects could not learn this task as a series of specific motor responses. On some lists, subjects with no knowledge of the ordinal position of the items were given the opportunity to learn the order of those items by observing an expert model. Children, like monkeys, learned new lists more rapidly in a social condition where they had the opportunity to observe an experienced model perform the list in question, than under a baseline condition in which they had to learn new lists entirely by trial and error. No differences were observed between the accuracy of each species' responses to individual items or in the frequencies with which they made different types of errors. These results provide clear evidence that monkeys and humans share the ability to imitate novel cognitive rules (cognitive imitation).

  16. Measurement equivalence of seven selected items of posttraumatic growth between black and white adult survivors of Hurricane Katrina.

    PubMed

    Rhodes, Alison M; Tran, Thanh V

    2013-02-01

    This study examined the equivalence or comparability of the measurement properties of seven selected items measuring posttraumatic growth among self-identified Black (n = 270) and White (n = 707) adult survivors of Hurricane Katrina, using data from the Baseline Survey of the Hurricane Katrina Community Advisory Group Study. Internal consistency reliability was equally good for both groups (Cronbach's alphas = .79), as were correlations between individual scale items and their respective overall scale. Confirmatory factor analysis of a congeneric measurement model of seven selected items of posttraumatic growth showed adequate measures of fit for both groups. The results showed only small variation in magnitude of factor loadings and measurement errors between the two samples. Tests of measurement invariance showed mixed results, but overall indicated that factor loading, error variance, and factor variance were similar between the two samples. These seven selected items can be useful for future large-scale surveys of posttraumatic growth.

  17. A direct evaluation of the Geosat altimeter wet atmospheric range delay using very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Koblinsky, C. J.; Ryan, J.; Braatz, L.; Klosko, S. M.

    1993-01-01

    The overall accuracy of the U.S. Navy Geosat altimeter wet atmospheric range delay caused by refraction through the atmosphere is directly assessed by comparing the estimates made from the DMSP Special Sensor Microwave/Imager and the U.S. Navy Fleet Numerical Ocean Center forecast model for Geosat with measurements of total zenith columnar water vapor content from four VLBI sites. The assessment is made by comparing time series of range delay from various methods at each location. To determine the importance of diurnal variation in water vapor content in noncoincident estimates, the VLBI measurements were made at 15-min intervals over a few days. The VLBI measurements showed strong diurnal variations in columnar water vapor at several sites, causing errors of the order 3 cm rms in any noncoincident measurement of the wet troposphere range delay. These errors have an effect on studies of annual and interannual changes in sea level with Geosat data.

  18. Phase correction and error estimation in InSAR time series analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategy of SAR satellites, such as large spatial and temporal baselines and irregular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such robust inversion allows us to focus more on understanding the different components in InSAR time series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data, and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) the variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed, TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b) and a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL), and a post-eruptive subsidence in the same area, with a maximum of -3 +/- 0.9 cm (fig. 1c). The time-series displacement map (fig. 2) shows a highly non-linear deformation behavior, indicating the complicated magma propagation process during this eruption cycle.
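
    The core of such an analysis is the least-squares inversion of the interferogram network into a per-date phase history. The sketch below is a generic weighted least-squares inversion for a connected network, not PySAR's implementation; the example pairs, weights and values are hypothetical:

```python
import numpy as np

def invert_network(ifg_phase, pairs, n_dates, weights=None):
    """Weighted least-squares inversion of a connected network of
    interferograms into a phase time series relative to the first date."""
    m = len(pairs)
    A = np.zeros((m, n_dates - 1))
    for k, (i, j) in enumerate(pairs):      # interferogram k spans dates i -> j
        if i > 0:
            A[k, i - 1] = -1.0
        if j > 0:
            A[k, j - 1] = 1.0
    W = np.eye(m) if weights is None else np.diag(weights)
    ts = np.linalg.solve(A.T @ W @ A, A.T @ W @ np.asarray(ifg_phase, float))
    return np.concatenate([[0.0], ts])      # phase at each date, first date = 0

# Example: 4 dates, interferograms between consecutive and skip-one pairs
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
ifg = [1.0, 0.9, 1.1, 1.9, 2.0]
print(invert_network(ifg, pairs, n_dates=4))
```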

  19. The eGFR-C study: accuracy of glomerular filtration rate (GFR) estimation using creatinine and cystatin C and albuminuria for monitoring disease progression in patients with stage 3 chronic kidney disease--prospective longitudinal study in a multiethnic population.

    PubMed

    Lamb, Edmund J; Brettell, Elizabeth A; Cockwell, Paul; Dalton, Neil; Deeks, Jon J; Harris, Kevin; Higgins, Tracy; Kalra, Philip A; Khunti, Kamlesh; Loud, Fiona; Ottridge, Ryan S; Sharpe, Claire C; Sitch, Alice J; Stevens, Paul E; Sutton, Andrew J; Taal, Maarten W

    2014-01-14

    Uncertainty exists regarding the optimal method to estimate glomerular filtration rate (GFR) for disease detection and monitoring. Widely used GFR estimates have not been validated in British ethnic minority populations. Iohexol-measured GFR will be the reference against which each estimating equation will be compared. The estimating equations will be based upon serum creatinine and/or cystatin C. The eGFR-C study has 5 components: 1) A prospective longitudinal cohort study of 1300 adults with stage 3 chronic kidney disease followed for 3 years with reference (measured) GFR and test (estimated GFR [eGFR] and urinary albumin-to-creatinine ratio) measurements at baseline and 3 years. Test measurements will also be undertaken every 6 months. The study population will include a representative sample of South-Asians and African-Caribbeans. People with diabetes and proteinuria (ACR ≥30 mg/mmol) will comprise 20-30% of the study cohort. 2) A sub-study of patterns of disease progression of 375 people (125 each of Caucasian, Asian and African-Caribbean origin; in each case containing subjects at high and low risk of renal progression). Additional reference GFR measurements will be undertaken after 1 and 2 years to enable a model of disease progression and error to be built. 3) A biological variability study to establish reference change values for reference and test measures. 4) A modelling study of the performance of monitoring strategies on detecting progression, utilising estimates of accuracy, patterns of disease progression and estimates of measurement error from studies 1), 2) and 3). 5) A comprehensive cost database for each diagnostic approach will be developed to enable cost-effectiveness modelling of the optimal strategy. The performance of the estimating equations will be evaluated by assessing bias, precision and accuracy. Data will be modelled as a linear function of time utilising all available (maximum 7) time points compared with the difference between baseline and final reference values. The percentage of participants demonstrating large error with the respective estimating equations will be compared. Predictive value of GFR estimates and albumin-to-creatinine ratio will be compared amongst subjects that do or do not show progressive kidney function decline. The eGFR-C study will provide evidence to inform the optimal GFR estimate to be used in clinical practice. ISRCTN42955626.

  20. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
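
    A minimal sketch of the outlier screening described above (a running-median high-pass to compensate the 1/f error, with the median absolute deviation as a robust error estimate); the window length, rejection threshold and synthetic trace are hypothetical choices:

```python
import numpy as np
from scipy.ndimage import median_filter

def screen_outliers(grad, window=101, k=5.0):
    """Detrend a gradient trace with a running median (high-pass, compensating
    the 1/f error) and flag outliers using the median absolute deviation (MAD)."""
    grad = np.asarray(grad, float)
    trend = median_filter(grad, size=window, mode="nearest")
    resid = grad - trend
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad                    # MAD -> sigma for Gaussian data
    return np.abs(resid) > k * sigma        # boolean mask of flagged samples

# Example: synthetic trace with a slow drift and two spikes
t = np.arange(2000)
trace = 1e-3 * t + np.random.normal(0.0, 1.0, t.size)
trace[500] += 20.0
trace[1500] -= 15.0
print(np.where(screen_outliers(trace))[0])
```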

  1. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.

  2. A Novel Study Paradigm for Long-term Prevention Trials in Alzheimer Disease: The Placebo Group Simulation Approach (PGSA): Application to MCI data from the NACC database.

    PubMed

    Berres, M; Kukull, W A; Miserez, A R; Monsch, A U; Monsell, S E; Spiegel, R

    2014-01-01

    The PGSA (Placebo Group Simulation Approach) aims at avoiding problems of sample representativeness and ethical issues typical of placebo-controlled secondary prevention trials with MCI patients. The PGSA uses mathematical modeling to forecast the distribution of quantified outcomes of MCI patient groups based on their own baseline data established at the outset of clinical trials. These forecasted distributions are then compared with the distribution of actual outcomes observed on candidate treatments, thus substituting for a concomitant placebo group. Here we investigate whether a PGSA algorithm that was developed from the MCI population of ADNI 1* can reliably simulate the distribution of composite neuropsychological outcomes from a larger, independently selected MCI subject sample. Data available from the National Alzheimer's Coordinating Center (NACC) were used. We included 1523 patients with single or multiple domain amnestic mild cognitive impairment (aMCI) and at least two follow-ups after baseline. In order to strengthen the analysis and to verify whether there was a drift over time in the neuropsychological outcomes, the NACC subject sample was split into 3 subsamples of similar size. The previously described PGSA algorithm for the trajectory of a composite neuropsychological test battery (NTB) score was adapted to the test battery used in NACC. Nine demographic, clinical, biological and neuropsychological candidate predictors were included in a mixed model; this model and its error terms were used to simulate trajectories of the adapted NTB. The distributions of empirically observed and simulated data after 1, 2 and 3 years were very similar, with some over-estimation of decline in all 3 subgroups. By far the most important predictor of the NTB trajectories is the baseline NTB score. Other significant predictors are the MMSE baseline score and the interactions of time with ApoE4 and FAQ (functional abilities). These are essentially the same predictors as determined for the original NTB score. An algorithm comprising a small number of baseline variables, notably cognitive performance at baseline, forecasts the group trajectory of cognitive decline in subsequent years with high accuracy. The current analysis of 3 independent subgroups of aMCI patients from the NACC database supports the validity of the PGSA longitudinal algorithm for an NTB. Use of the PGSA in long-term secondary AD prevention trials deserves consideration.

  3. A side-by-side comparison of CPV module and system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Matthew; Marion, Bill; Kurtz, Sarah

    A side-by-side comparison is made between concentrator photovoltaic module and system direct current aperture efficiency data with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss of 3.7% relative is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct current aperture efficiency with a root mean square error of 2.3% relative.

  4. The effects of partial and full correction of refractive errors on sensorial and motor outcomes in children with refractive accommodative esotropia.

    PubMed

    Sefi-Yurdakul, Nazife; Kaykısız, Hüseyin; Koç, Feray

    2018-03-17

    To investigate the effects of partial and full correction of refractive errors on sensorial and motor outcomes in children with refractive accommodative esotropia (RAE). The records of pediatric cases with full RAE were reviewed; their first and last sensorial and motor findings were evaluated in two groups, classified as partial (Group 1) and full correction (Group 2) of refractive errors. The mean age at first admission was 5.84 ± 3.62 years in Group 1 (n = 35) and 6.35 ± 3.26 years in Group 2 (n = 46) (p = 0.335). Mean change in best corrected visual acuity (BCVA) was 0.24 ± 0.17 logarithm of the minimum angle of resolution (logMAR) in Group 1 and 0.13 ± 0.16 logMAR in Group 2 (p = 0.001). Duration of deviation, baseline refraction and amount of reduced refraction showed significant effects on change in BCVA (p < 0.05). Significant correlation was determined between binocular vision (BOV), duration of deviation and uncorrected baseline amount of deviation (p < 0.05). The baseline BOV rates were significantly high in fully corrected Group 2, and also were found to have increased in Group 1 (p < 0.05). Change in refraction was - 0.09 ± 1.08 and + 0.35 ± 0.76 diopters in Groups 1 and 2, respectively (p = 0.005). Duration of deviation, baseline refraction and the amount of reduced refraction had significant effects on change in refraction (p < 0.05). Change in deviation without refractive correction was - 0.74 ± 7.22 prism diopters in Group 1 and - 3.24 ± 10.41 prism diopters in Group 2 (p = 0.472). Duration of follow-up and uncorrected baseline deviation showed significant effects on change in deviation (p < 0.05). Although the BOV rates and BCVA were initially high in fully corrected patients, they finally improved significantly in both the fully and partially corrected patients. Full hypermetropic correction may also cause an increase in the refractive error with a possible negative effect on emmetropization. The negative effect of the duration of deviation on BOV and BCVA demonstrates the significance of early treatment in RAE cases.

  5. Evaluation of limited sampling models for prediction of oral midazolam AUC for CYP3A phenotyping and drug interaction studies.

    PubMed

    Mueller, Silke C; Drewelow, Bernd

    2013-05-01

    The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r^2 = 0.84), the two-sampling point model (0.5 and 8 h; r^2 = 0.93), the three-sampling point model (0.5, 2, and 8 h; r^2 = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r^2 = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90 % confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict midazolam AUC accurately.
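
    The bias and precision metrics used above are straightforward to compute; the sketch below evaluates MPE, MAE and RMSE as percentage prediction errors of limited-sampling AUC estimates against a full-profile reference (the numbers are hypothetical):

```python
import numpy as np

def prediction_errors(auc_pred, auc_ref):
    """Bias and precision metrics used to judge limited-sampling models."""
    pred = np.asarray(auc_pred, float)
    ref = np.asarray(auc_ref, float)
    pe = 100.0 * (pred - ref) / ref         # percentage prediction errors
    return {
        "MPE": pe.mean(),                   # mean prediction error (bias)
        "MAE": np.abs(pe).mean(),           # mean absolute error
        "RMSE": np.sqrt((pe ** 2).mean()),  # root mean squared error (precision)
    }

# Example: hypothetical AUC values from a limited-sampling model vs the full profile
print(prediction_errors([105, 98, 230, 60], [100, 95, 250, 55]))
```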

  6. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    PubMed

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or due to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: Its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
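
    As an illustration of the general idea (not the published Baseline Corrected Tau implementation), the sketch below removes a robust Theil-Sen trend fitted to the baseline phase before computing a Kendall rank correlation between phase membership and the corrected scores; the example series is hypothetical:

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

def baseline_corrected_tau(y, phase):
    """Fit a robust (Theil-Sen) trend to the baseline phase, remove it from the
    whole series, then correlate the corrected data with the phase dummy."""
    y = np.asarray(y, float)
    phase = np.asarray(phase, int)
    t = np.arange(len(y))
    slope, intercept, *_ = theilslopes(y[phase == 0], t[phase == 0])
    corrected = y - (intercept + slope * t)      # detrend using the baseline fit
    tau, p_value = kendalltau(phase, corrected)
    return tau, p_value

# Example: an improving baseline followed by a treatment phase
y = [2, 3, 3, 4, 5, 9, 10, 9, 11, 12]
phase = [0] * 5 + [1] * 5
print(baseline_corrected_tau(y, phase))
```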

  7. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
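
    A simple two-step stand-in for this idea is sketched below (explicit polynomial baseline removal followed by ordinary PLS, rather than the coupled BCC-PLS weight selection proposed in the paper); the spectra and concentrations are randomly generated placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def detrend_spectra(X, order=2):
    """Remove a low-order polynomial baseline from each spectrum (rows of X)."""
    X = np.asarray(X, float)
    wn = np.arange(X.shape[1])
    basis = np.vander(wn, order + 1)              # polynomial design matrix
    coef, *_ = np.linalg.lstsq(basis, X.T, rcond=None)
    return X - (basis @ coef).T

# Two-step stand-in for BCC-PLS: baseline removal, then ordinary PLS calibration
X_train = np.random.rand(40, 200)                 # hypothetical ATR-FTIR spectra
y_train = np.random.rand(40)                      # hypothetical moisture content
pls = PLSRegression(n_components=5).fit(detrend_spectra(X_train), y_train)

X_new = np.random.rand(5, 200)
print(pls.predict(detrend_spectra(X_new)).ravel())
```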

  8. Recruitment into diabetes prevention programs: what is the impact of errors in self-reported measures of obesity?

    PubMed

    Hernan, Andrea; Philpot, Benjamin; Janus, Edward D; Dunbar, James A

    2012-07-08

    Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004-05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = -0.226, p-value < 0.001). However, underestimation resulted in only 6% of individuals at high risk of diabetes being incorrectly categorised as moderate or low risk of diabetes. Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.

  9. Normalized Point Source Sensitivity for Off-Axis Optical Performance Evaluation of the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George

    2010-01-01

    The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.

  10. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29, achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
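
    A toy sketch of combining string similarity with corpus frequency for candidate ranking is shown below (difflib's similarity ratio is used as a stand-in for edit distance, and the vocabulary, frequencies and scoring weights are hypothetical, not the paper's ensemble):

```python
import difflib

def correct(word, vocabulary, freq):
    """Rank candidate corrections by a mix of string similarity and
    (normalized) corpus frequency; return the best-scoring candidate."""
    candidates = difflib.get_close_matches(word, vocabulary, n=5, cutoff=0.6)
    if not candidates:
        return word                                   # no plausible correction
    max_freq = max(freq.values())
    return max(
        candidates,
        key=lambda c: 0.7 * difflib.SequenceMatcher(None, word, c).ratio()
                      + 0.3 * freq.get(c, 0) / max_freq,
    )

vocab = ["diabetes", "diagnosis", "dialysis", "headache"]
freq = {"diabetes": 900, "diagnosis": 500, "dialysis": 120, "headache": 300}
print(correct("diabtes", vocab, freq))
```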

  11. Online Removal of Baseline Shift with a Polynomial Function for Hemodynamic Monitoring Using Near-Infrared Spectroscopy.

    PubMed

    Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting

    2018-01-21

    Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed perfect polynomial functions in phantom tests mimicking human bodies, as identified by recent NIRS studies. More importantly, our study shows that, among second- to sixth-order polynomials, the fourth-order polynomial function gave the most distinguished performance, with stable and low-computation-burden fitting calibration (R-square >0.99 for all probes), evaluated by the parameters R-square, sum of squares due to error, and residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
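
    A minimal sketch of fourth-order polynomial baseline removal and the R-square comparison it rests on is given below; the synthetic drift coefficients and oscillation are hypothetical, chosen only to exercise the fit:

```python
import numpy as np

def remove_baseline(signal, t, order=4):
    """Fit and subtract a polynomial baseline (fourth order by default) and
    report the R-square of the baseline fit, as used to compare orders."""
    coef = np.polyfit(t, signal, order)
    baseline = np.polyval(coef, t)
    resid = signal - baseline
    ss_res = np.sum(resid ** 2)                        # sum of squares due to error
    ss_tot = np.sum((signal - np.mean(signal)) ** 2)
    return resid, 1.0 - ss_res / ss_tot                # corrected signal, R-square

# Example: synthetic slow drift plus a small 1-minute-period oscillation
t = np.linspace(0.0, 3.5, 5000)                        # time in hours, >3.5 h record
drift = 0.5 * t**4 - 1.2 * t**3 + 0.8 * t**2
signal = drift + 0.01 * np.sin(2 * np.pi * 60 * t)
corrected, r_square = remove_baseline(signal, t)
print(round(r_square, 4))
```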

  12. Modified empirical Solar Radiation Pressure model for IRNSS constellation

    NASA Astrophysics Data System (ADS)

    Rajaiah, K.; Manamohan, K.; Nirmala, S.; Ratnakara, S. C.

    2017-11-01

    Navigation with Indian Constellation (NAVIC), also known as the Indian Regional Navigation Satellite System (IRNSS), is India's regional navigation system designed to provide position accuracy better than 20 m over India and the region extending to 1500 km around India. Reduced-dynamic precise orbit estimation is utilized to determine the orbit broadcast parameters for the IRNSS constellation. The estimation is mainly affected by the parameterization of dynamic models, especially the Solar Radiation Pressure (SRP) model, a non-gravitational force depending on the shape and attitude dynamics of the spacecraft. An empirical nine-parameter solar radiation pressure model is developed for the IRNSS constellation, using two-way range measurements from the IRNSS C-band ranging system. The paper addresses the development of a modified empirical SRP model for IRNSS (IRNSS SRP Empirical Model, ISEM). The performance of ISEM was assessed based on overlap consistency, long-term prediction, and Satellite Laser Ranging (SLR) residuals, and compared with the ECOM9, ECOM5 and new-ECOM9 models developed by the Center for Orbit Determination in Europe (CODE). For IRNSS Geostationary Earth Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO) satellites, ISEM has shown promising results, with overlap RMS errors better than 5.3 m and 3.5 m, respectively. Long-term orbit prediction using numerical integration has improved, with errors better by 80%, 26% and 7.8% in comparison to ECOM9, ECOM5 and new-ECOM9, respectively. Further, SLR-based orbit determination with ISEM shows 70%, 47% and 39% improvement over 10-day orbit prediction in comparison to ECOM9, ECOM5 and new-ECOM9, respectively, and also highlights the importance of a wide-baseline tracking network.

  13. Residential scene classification for gridded population sampling in developing countries using deep convolutional neural networks on satellite imagery.

    PubMed

    Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark

    2018-05-09

    Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks since it is labor intensive, prone to human error, and creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost. Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.

  14. Effects of Serum Creatinine Calibration on Estimated Renal Function in African Americans: the Jackson Heart Study

    PubMed Central

    Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.

    2015-01-01

    Background The calibration to Isotope Dilution Mass Spectroscopy (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the re-measurement and 5 for outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate GFR and the prevalence of CKD. Results The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and intercept of −0.0248 (95% CI, −0.0862 to 0.0366) with R squared 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applying to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation (P < 0.001). Conclusions A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862
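
    For context, a bare-bones Deming regression (allowing errors in both assays, with an assumed error-variance ratio) can be written as follows; the paired creatinine values are hypothetical and this is not the study's exact estimation code:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept when both x (routine assay) and
    y (reference assay) carry measurement error; lam is the assumed ratio of
    the y-error variance to the x-error variance (1.0 means equal errors)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical paired creatinine values (mg/dL): baseline assay vs IDMS-traceable assay
x = [0.8, 1.0, 1.2, 1.5, 2.0, 3.1]
y = [0.76, 0.95, 1.18, 1.44, 1.95, 3.02]
slope, intercept = deming(x, y)
print(round(slope, 3), round(intercept, 3))
```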

  15. Effects of serum creatinine calibration on estimated renal function in african americans: the Jackson heart study.

    PubMed

    Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E

    2015-05-01

    The calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the remeasurement and 5 for outliers) were divided into 3 disjoint sets-training, validation and test-to select a calibration model, estimate true errors and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and intercept of -0.0248 (95% CI, -0.0862 to 0.0366) with R value of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applying to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation (P < 0.001). A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate.

  16. Gated integrator with signal baseline subtraction

    DOEpatents

    Wang, X.

    1996-12-17

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window. 5 figs.

  17. Gated integrator with signal baseline subtraction

    DOEpatents

    Wang, Xucheng

    1996-01-01

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window.

  18. Rapid Ice Loss at Vatnajokull,Iceland Since Late 1990s Constrained by Synthetic Aperture Radar Interferometry

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Amelung, F.; Dixon, T. H.; Wdowinski, S.

    2012-12-01

    Synthetic aperture radar interferometry time series analysis is applied over Vatnajokull, Iceland, using 15 years of ERS data. Ice loss at Vatnajokull has accelerated since the late 1990s, especially after the turn of the 21st century. A clear uplift signal due to ice mass loss is detected. The rebound signal is generally linear and increases slightly after 2000. The relative annual velocity (with GPS station 7485 as reference) is about 12 mm/yr at the ice cap edge, which matches previous studies using GPS. The standard deviation compared to 11 GPS stations in this area is about 2 mm/yr. A relative-value modeling method ignoring the effect of viscous flow is chosen, assuming an elastic half-space earth. The final ice loss estimate of 83 cm/yr matches the climatology model with ground observations. The Small Baseline Subsets method is applied for the time series analysis. Orbit error, coupled with a long-wavelength phase trend due to horizontal plate motion, is removed based on a second-order polynomial model. For simplicity, we do not consider atmospheric delay in this area because there is no complex topography, and small-scale turbulence is well suppressed by long-term averaging when calculating the annual mean velocity. Some unwrapping error still exists because of low coherence. Other uncertainties include the basic assumption of the ice loss pattern and the spatial variation of the elastic parameters. This is the first time we have applied InSAR time series to an ice mass balance study and provided a detailed error and uncertainty analysis. The success of this application demonstrates InSAR as an option for mass balance studies and is also important for the validation of different ice loss estimation techniques.

  19. Improved Stratospheric Temperature Retrievals for Climate Reanalysis

    NASA Technical Reports Server (NTRS)

    Rokke, L.; Joiner, J.

    1999-01-01

    The Data Assimilation Office (DAO) is embarking on plans to generate a twenty-year reanalysis data set of climatic atmospheric variables. One of the focus points will be in the evaluation of the dynamics of the stratosphere. The Stratospheric Sounding Unit (SSU), flown as part of the TIROS Operational Vertical Sounder (TOVS), is one of the primary stratospheric temperature sensors flown consistently throughout the reanalysis period. Seven unique sensors made the measurements over time, with individual instrument characteristics that need to be addressed. The stratospheric temperatures being assimilated across satellite platforms will profoundly impact the reanalysis dynamical fields. To attempt to quantify aspects of instrument and retrieval bias we are carefully collecting and analyzing all available information on the sensors, their instrument anomalies, forward model errors and retrieval biases. For the retrieval of stratospheric temperatures, we adapted the minimum variance approach of Jazwinski (1970) and Rodgers (1976) and applied it to the SSU soundings. In our algorithm, the state vector contains an initial guess of temperature from a model six-hour forecast provided by the Goddard EOS Data Assimilation System (GEOS/DAS). This is combined with an a priori covariance matrix, a forward model parameterization, and specifications of instrument noise characteristics. A quasi-Newtonian iteration is used to obtain convergence of the retrieved state to the measurement vector. This algorithm also enables us to analyze and address the systematic errors associated with the unique characteristics of the cell pressures on the individual SSU instruments and the resolving power of the instruments to vertical gradients in the stratosphere. The preliminary results of the improved retrievals and their assimilation, as well as baseline calculations of bias and rms error between the NESDIS operational product and collocated ground measurements, will be presented.

  20. Uncertainty Assessment of the NASA Earth Exchange Global Daily Downscaled Climate Projections (NEX-GDDP) Dataset

    NASA Technical Reports Server (NTRS)

    Wang, Weile; Nemani, Ramakrishna R.; Michaelis, Andrew; Hashimoto, Hirofumi; Dungan, Jennifer L.; Thrasher, Bridget L.; Dixon, Keith W.

    2016-01-01

    The NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset is comprised of downscaled climate projections that are derived from 21 General Circulation Model (GCM) runs conducted under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and across two of the four greenhouse gas emissions scenarios (RCP4.5 and RCP8.5). Each of the climate projections includes daily maximum temperature, minimum temperature, and precipitation for the period from 1950 through 2100, and the spatial resolution is 0.25 degrees (approximately 25 km x 25 km). The GDDP dataset has been warmly received by the science community for conducting studies of climate change impacts at local to regional scales, but a comprehensive evaluation of its uncertainties is still missing. In this study, we apply the Perfect Model Experiment framework (Dixon et al. 2016) to quantify the key sources of uncertainties from the observational baseline dataset, the downscaling algorithm, and some intrinsic assumptions (e.g., the stationarity assumption) inherent to the statistical downscaling techniques. We developed a set of metrics to evaluate downscaling errors resulting from bias correction ("quantile mapping"), spatial disaggregation, and the temporal-spatial non-stationarity of climate variability. Our results highlight the spatial disaggregation (or interpolation) errors, which dominate the overall uncertainties of the GDDP dataset, especially over heterogeneous and complex terrain (e.g., mountains and coastal areas). In comparison, the temporal errors in the GDDP dataset tend to be more constrained. Our results also indicate that the downscaled daily precipitation has relatively larger uncertainties than the temperature fields, reflecting the rather stochastic nature of precipitation in space. Therefore, our results provide insights for improving statistical downscaling algorithms and products in the future.
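
    A minimal sketch of the empirical quantile-mapping step that such bias correction rests on is shown below (generic, not the NEX-GDDP implementation); the synthetic data assume a simple warm bias and the values are hypothetical:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each future model value through the
    historical model CDF onto the observed baseline distribution."""
    model_hist = np.sort(np.asarray(model_hist, float))
    obs_hist = np.sort(np.asarray(obs_hist, float))
    q_model = np.linspace(0.0, 1.0, len(model_hist))
    q_obs = np.linspace(0.0, 1.0, len(obs_hist))
    # quantile of each future value within the historical model distribution
    ranks = np.interp(model_future, model_hist, q_model)
    # corresponding value in the observed baseline distribution
    return np.interp(ranks, q_obs, obs_hist)

# Example: a model that runs about 2 degrees too warm relative to the baseline obs
obs = np.random.normal(15.0, 5.0, 1000)
mod = np.random.normal(17.0, 5.0, 1000)
print(quantile_map(mod, obs, [10.0, 17.0, 25.0]))
```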

  1. Quality improvement of International Classification of Diseases, 9th revision, diagnosis coding in radiation oncology: single-institution prospective study at University of California, San Francisco.

    PubMed

    Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E

    2015-01-01

    Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy--specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%) with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.

  2. Inferring biogeochemistry past: a millennial-scale multimodel assimilation of multiple paleoecological proxies.

    NASA Astrophysics Data System (ADS)

    Dietze, M.; Raiho, A.; Fer, I.; Dawson, A.; Heilman, K.; Hooten, M.; McLachlan, J. S.; Moore, D. J.; Paciorek, C. J.; Pederson, N.; Rollinson, C.; Tipton, J.

    2017-12-01

    The pre-industrial period serves as an essential baseline against which we judge anthropogenic impacts on the earth's systems. However, direct measurements of key biogeochemical processes, such as carbon, water, and nutrient cycling, are absent for this period and there is no direct way to link paleoecological proxies, such as pollen and tree rings, to these processes. Process-based terrestrial ecosystem models provide a way to make inferences about the past, but have large uncertainties and by themselves often fail to capture much of the observed variability. Here we investigate the ability to improve inferences about pre-industrial biogeochemical cycles through the formal assimilation of proxy data into multiple process-based models. A Tobit ensemble filter with explicit estimation of process error was run at five sites across the eastern US for three models (LINKAGES, ED2, LPJ-GUESS). In addition to process error, the ensemble accounted for parameter uncertainty, estimated through the assimilation of the TRY and BETY trait databases, and driver uncertainty, accommodated by probabilistically downscaling and debiasing CMIP5 GCM output then filtering based on paleoclimate reconstructions. The assimilation was informed by four PalEON data products, each of which includes an explicit Bayesian error estimate: (1) STEPPS forest composition estimated from fossil pollen; (2) REFAB aboveground biomass (AGB) estimated from fossil pollen; (3) tree ring AGB and woody net primary productivity (wNPP); and (4) public land survey composition, stem density, and AGB. By comparing ensemble runs with and without data assimilation we are able to assess the information contribution of the proxy data to constraining biogeochemical fluxes, which is driven by the combination of model uncertainty, data uncertainty, and the strength of correlation between observed and unobserved quantities in the model ensemble. To our knowledge this is the first attempt at multi-model data assimilation with terrestrial ecosystem models. Results from the data-model assimilation allow us to assess the consistency across models in post-assimilation inferences about indirectly inferred quantities, such as GPP, soil carbon, and the water budget.

  3. An a priori model for the reduction of nutation observations: KSV(1994.3) nutation series

    NASA Technical Reports Server (NTRS)

    Herring, T. A.

    1995-01-01

    We discuss the formulation of a new nutation series to be used in the reduction of modern space geodetic data. The motivation is to provide a nutation series that has smaller short-period errors than the IAU 1980 nutation series and that can be used with techniques such as the Global Positioning System (GPS) that have sensitivity to nutations but can directly separate the effects of nutations from errors in the dynamical force models that affect the satellite orbits. A modern nutation series should allow the errors in the force models for GPS to be better understood. The series is constructed by convolving the Kinoshita and Souchay rigid Earth nutation series with an Earth response function whose parameters are partly based on geophysical models of the Earth and partly estimated from a long series (1979-1993) of very long baseline interferometry (VLBI) estimates of nutation angles. Secular rates of change of the nutation angles, representing corrections to the precession constant and a secular change of the obliquity of the ecliptic, are included in the theory. Time-dependent amplitudes of the Free Core Nutation (FCN), which is most likely excited by variations in atmospheric pressure, are included when the geophysical parameters are estimated. The complex components of the prograde annual nutation are estimated simultaneously with the geophysical parameters because of the large contribution to the nutation from the S(sub 1) atmospheric tide. The weighted root mean square (WRMS) scatter of the nutation angle estimates about this new model is 0.32 mas, and the largest correction to the series when the amplitudes of the ten largest nutations are estimated is 0.18 +/- 0.03 mas for the in-phase component of the prograde 18.6-year nutation.

  4. Impact of Robotic Antineoplastic Preparation on Safety, Workflow, and Costs

    PubMed Central

    Seger, Andrew C.; Churchill, William W.; Keohane, Carol A.; Belisle, Caryn D.; Wong, Stephanie T.; Sylvester, Katelyn W.; Chesnick, Megan A.; Burdick, Elisabeth; Wien, Matt F.; Cotugno, Michael C.; Bates, David W.; Rothschild, Jeffrey M.

    2012-01-01

    Purpose: Antineoplastic preparation presents unique safety concerns and consumes significant pharmacy staff time and costs. Robotic antineoplastic and adjuvant medication compounding may provide incremental safety and efficiency advantages compared with standard pharmacy practices. Methods: We conducted a direct observation trial in an academic medical center pharmacy to compare the effects of usual/manual antineoplastic and adjuvant drug preparation (baseline period) with robotic preparation (intervention period). The primary outcomes were serious medication errors and staff safety events with the potential for harm to patients and staff, respectively. Secondary outcomes included medication accuracy determined by gravimetric techniques, medication preparation time, and the costs of both ancillary materials used during drug preparation and personnel time. Results: Among 1,421 and 972 observed medication preparations, we found nine (0.7%) and seven (0.7%) serious medication errors (P = .8) and 73 (5.1%) and 28 (2.9%) staff safety events (P = .007) in the baseline and intervention periods, respectively. Drugs failed accuracy measurements in 12.5% (23 of 184) and 0.9% (one of 110) of preparations in the baseline and intervention periods, respectively (P < .001). Mean drug preparation time increased by 47% when using the robot (P = .009). Labor costs were similar in both study periods, although the ancillary material costs decreased by 56% in the intervention period (P < .001). Conclusion: Although robotically prepared antineoplastic and adjuvant medications did not reduce serious medication errors, both staff safety and accuracy of medication preparation were improved significantly. Future studies are necessary to address the overall cost effectiveness of these robotic implementations. PMID:23598843

  5. Extragalactic radio sources - Accurate positions from very-long-baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Counselman, C. C., III; Hinteregger, H. F.; Knight, C. A.; Robertson, D. S.; Shapiro, I. I.; Whitney, A. R.; Clark, T. A.

    1973-01-01

    Relative positions for 12 extragalactic radio sources have been determined via wide-band very-long-baseline interferometry (wavelength of about 3.8 cm). The standard error, based on consistency between results from widely separated periods of observation, appears to be no more than 0.1 sec for each coordinate of the seven sources that were well observed during two or more periods. The uncertainties in the coordinates determined for the other five sources are larger, but in no case exceed 0.5 sec.

  6. The association of longitudinal trend of fasting plasma glucose with retinal microvasculature in people without established diabetes.

    PubMed

    Hu, Yin; Niu, Yong; Wang, Dandan; Wang, Ying; Holden, Brien A; He, Mingguang

    2015-01-22

    Structural changes of retinal vasculature, such as altered retinal vascular calibers, are considered early signs of systemic vascular damage. We examined the associations of 5-year mean level, longitudinal trend, and fluctuation in fasting plasma glucose (FPG) with retinal vascular caliber in people without established diabetes. A prospective study was conducted in a cohort of Chinese people aged ≥40 years in Guangzhou, southern China. The FPG was measured at baseline in 2008 and annually until 2012. In 2012, retinal vascular caliber was assessed using standard fundus photographs and validated software. A total of 3645 baseline nondiabetic participants with baseline and follow-up data on FPG for 3 or more visits were included for statistical analysis. The associations of retinal vascular caliber with 5-year mean FPG level, longitudinal FPG trend (slope of the linear regression of FPG on time), and fluctuation (standard deviation and root mean square error of FPG) were analyzed using multivariable linear regression analyses. Multivariate regression models adjusted for baseline FPG and other potential confounders showed that a 10% annual increase in FPG was independently associated with a 2.65-μm narrowing in retinal arterioles (P = 0.008) and a 3.47-μm widening in venules (P = 0.004). Associations with mean FPG level and fluctuation were not statistically significant. An annual rising trend in FPG, but not its mean level or fluctuation, is associated with altered retinal vasculature in nondiabetic people. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
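
    A minimal sketch of the per-participant trend and fluctuation measures described above (slope of a linear fit of FPG on time, plus standard deviation and RMSE about the fit), assuming NumPy; the function name and the values shown are illustrative, not study data.

        import numpy as np

        def fpg_trend_and_fluctuation(years, fpg):
            """Trend (slope of FPG vs. time) and fluctuation (SD and RMSE about the fit) for one participant."""
            years = np.asarray(years, dtype=float)
            fpg = np.asarray(fpg, dtype=float)
            slope, intercept = np.polyfit(years, fpg, 1)
            fitted = slope * years + intercept
            rmse = np.sqrt(np.mean((fpg - fitted) ** 2))
            return slope, np.std(fpg, ddof=1), rmse

        # Five annual visits for one hypothetical participant (FPG in mmol/L)
        print(fpg_trend_and_fluctuation([0, 1, 2, 3, 4], [5.2, 5.4, 5.3, 5.6, 5.8]))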

  7. Covariance analysis of the airborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.

    1981-01-01

    The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.

  8. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  9. White Matter Integrity and Treatment-Based Change in Speech Performance in Minimally Verbal Children with Autism Spectrum Disorder.

    PubMed

    Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried

    2017-01-01

    We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity-fractional anisotropy (FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT)-were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable-insertion errors (from best baseline to post 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.

  10. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
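
    A minimal sketch of the comparison logic described above: outputs from a candidate model version are compared against the version taken as error-free, and relative differences beyond +/-5% are flagged as material. The function and the output names are illustrative, not the study's spreadsheets.

        def material_differences(error_free, candidate, threshold=0.05):
            """Flag outputs whose relative difference from the error-free version exceeds the threshold."""
            flags = {}
            for key, ref in error_free.items():
                if ref == 0:
                    flags[key] = candidate[key] != 0
                else:
                    flags[key] = abs((candidate[key] - ref) / ref) > threshold
            return flags

        # Hypothetical care-continuum outputs from two parallel versions
        v_named_matrices = {"in_care": 1000, "on_treatment": 820, "suppressed": 640}
        v_column_row = {"in_care": 1000, "on_treatment": 1030, "suppressed": 810}
        print(material_differences(v_named_matrices, v_column_row))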

  11. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  12. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability approach using different error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The parameters of the seasonally variant error model are very sensitive to each cross-validation period, while the hierarchical error model produces much more robust and reliable parameter estimates. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
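
    A minimal sketch of the sample-based continuous ranked probability score used above to compare error models, assuming an ensemble of streamflow predictions per month; the data below are synthetic and the function is an illustration, not the study's code.

        import numpy as np

        def crps_ensemble(members, obs):
            """CRPS estimated from ensemble members: E|X - y| - 0.5 E|X - X'|."""
            members = np.asarray(members, dtype=float)
            term1 = np.mean(np.abs(members - obs))
            term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
            return term1 - term2

        rng = np.random.default_rng(1)
        ens = rng.lognormal(3.0, 0.4, size=(120, 50))   # 120 months x 50 ensemble members
        obs = rng.lognormal(3.0, 0.4, size=120)          # observed monthly streamflow
        mean_crps = np.mean([crps_ensemble(ens[t], obs[t]) for t in range(len(obs))])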

  13. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is a better choice.
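
    A minimal sketch contrasting the two error models discussed in this letter, under the common assumption that the multiplicative model becomes additive in log space; the data are synthetic stand-ins, not satellite retrievals.

        import numpy as np

        # Additive:        y = x + e,           e ~ N(mu, sigma^2)
        # Multiplicative:  y = x * exp(e)  <=>  log y = log x + e   (x, y > 0)
        def fit_additive(x, y):
            resid = y - x
            return resid.mean(), resid.std()        # systematic and random components

        def fit_multiplicative(x, y):
            resid = np.log(y) - np.log(x)            # requires strictly positive values
            return resid.mean(), resid.std()

        rng = np.random.default_rng(2)
        truth = rng.gamma(2.0, 5.0, 1000) + 0.1                    # reference daily precipitation
        retrieval = truth * np.exp(rng.normal(0.1, 0.4, 1000))     # satellite-like estimate
        print(fit_additive(truth, retrieval), fit_multiplicative(truth, retrieval))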

  14. Reducing RANS Model Error Using Random Forest

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
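
    A minimal sketch of the regression step described above, assuming scikit-learn's RandomForestRegressor; the features and the synthetic training target stand in for the mean-flow invariants and Reynolds stress discrepancies used in the study, and are not the authors' data or code.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        # Mean-flow features at training points (e.g., scalar invariants of strain/rotation)
        X_train = rng.normal(size=(2000, 5))
        # One component of the Reynolds stress discrepancy (high-fidelity minus RANS), synthetic here
        y_train = 0.3 * X_train[:, 0] - 0.1 * X_train[:, 1] ** 2 + rng.normal(0.0, 0.05, 2000)

        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X_train, y_train)

        # Predict the discrepancy in a new flow, then add it to the baseline RANS stresses
        X_new = rng.normal(size=(100, 5))
        predicted_discrepancy = rf.predict(X_new)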

  15. Significant motions between GPS sites in the New Madrid region: implications for seismic hazard

    USGS Publications Warehouse

    Frankel, Arthur; Smalley, Robert; Paul, J.

    2012-01-01

    Position time series from Global Positioning System (GPS) stations in the New Madrid region were differenced to determine the relative motions between stations. Uncertainties in rates were estimated using a three‐component noise model consisting of white, flicker, and random walk noise, following the methodology of Langbein, 2004. Significant motions of 0.37±0.07 (one standard error) mm/yr were found between sites PTGV and STLE, for which the baseline crosses the inferred deep portion of the Reelfoot fault. Baselines between STLE and three other sites also show significant motion. Site MCTY (adjacent to STLE) also exhibits significant motion with respect to PTGV. These motions are consistent with a model of interseismic slip of about 4  mm/yr on the Reelfoot fault at depths between 12 and 20 km. If constant over time, this rate of slip produces sufficient slip for an M 7.3 earthquake on the shallow portion of the Reelfoot fault, using the geologically derived recurrence time of 500 years. This model assumes that the shallow portion of the fault has been previously loaded by the intraplate stress. A GPS site near Little Rock, Arkansas, shows significant southward motion of 0.3–0.4  mm/yr (±0.08  mm/yr) relative to three sites to the north, indicating strain consistent with focal mechanisms of earthquake swarms in northern Arkansas.

  16. Performance evaluation of GIM-TEC assimilation of the IRI-Plas model at two equatorial stations in the American sector

    NASA Astrophysics Data System (ADS)

    Adebiyi, S. J.; Adebesin, B. O.; Ikubanni, S. O.; Joshua, B. W.

    2017-05-01

    Empirical models of the ionosphere, such as the International Reference Ionosphere (IRI) model, play a vital role in evaluating the environmental effect on the operation of space-based communication and navigation technologies. The IRI extended to Plasmasphere (IRI-Plas) model can be adjusted with external data to update its electron density profile while still maintaining the overall integrity of the model representations. In this paper, the performance of the total electron content (TEC) assimilation option of the IRI-Plas at two equatorial stations, Jicamarca, Peru (geographic: 12°S, 77°W, dip angle 0.8°) and Cachoeira Paulista, Brazil (geographic: 22.7°S, 45°W, dip angle -26°), is examined during quiet and disturbed conditions. TEC, F2 layer critical frequency (foF2), and peak height (hmF2) predicted when the model is operated without external input were used as a baseline in our model evaluation. Results indicate that TEC predicted by the assimilation option generally produced smaller estimation errors compared to the "no extra input" option during quiet and disturbed conditions. Generally, the error is smaller at the equatorial trough than near the crest for both quiet and disturbed days. With the assimilation option, there is a substantial improvement of storm-time estimations when compared with quiet-time predictions. The improvement is, however, independent of the storm's severity. Furthermore, the modeled foF2 and hmF2 are generally poor with TEC assimilation, particularly the hmF2 prediction, at the two locations during both quiet and disturbed conditions. Consequently, the IRI-Plas model assimilated with TEC values only may not be sufficient where more realistic instantaneous values of the peak parameters are required.

  17. Evaluating the Utility of Remotely-Sensed Soil Moisture Retrievals for Operational Agricultural Drought Monitoring

    NASA Technical Reports Server (NTRS)

    Bolten, John D.; Crow, Wade T.; Zhan, Xiwu; Jackson, Thomas J.; Reynolds, Curt

    2010-01-01

    Soil moisture is a fundamental data source used by the United States Department of Agriculture (USDA) International Production Assessment Division (IPAD) to monitor crop growth stage and condition and subsequently, globally forecast agricultural yields. Currently, the USDA IPAD estimates surface and root-zone soil moisture using a two-layer modified Palmer soil moisture model forced by global precipitation and temperature measurements. However, this approach suffers from well-known errors arising from uncertainty in model forcing data and highly simplified model physics. Here we attempt to correct for these errors by designing and applying an Ensemble Kalman filter (EnKF) data assimilation system to integrate surface soil moisture retrievals from the NASA Advanced Microwave Scanning Radiometer (AMSR-E) into the USDA modified Palmer soil moisture model. An assessment of soil moisture analysis products produced from this assimilation has been completed for a five-year (2002 to 2007) period over the North American continent between 23degN - 50degN and 128degW - 65degW. In particular, a data denial experimental approach is utilized to isolate the added utility of integrating remotely-sensed soil moisture by comparing EnKF soil moisture results obtained using (relatively) low-quality precipitation products obtained from real-time satellite imagery to baseline Palmer model runs forced with higher quality rainfall. An analysis of root-zone anomalies for each model simulation suggests that the assimilation of AMSR-E surface soil moisture retrievals can add significant value to USDA root-zone predictions derived from real-time satellite precipitation products.
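
    A minimal sketch of a single ensemble Kalman filter analysis step for a two-layer soil moisture state with one surface observation; this illustrates the generic EnKF update only and is not the USDA Palmer-model assimilation system itself.

        import numpy as np

        def enkf_update(ensemble, obs, obs_err_var):
            """One EnKF analysis step; columns are ensemble members, row 0 is the observed (surface) layer."""
            n_state, n_ens = ensemble.shape
            hx = ensemble[0, :]                                           # predicted observations
            x_mean = ensemble.mean(axis=1, keepdims=True)
            pxh = (ensemble - x_mean) @ (hx - hx.mean()) / (n_ens - 1)    # state-observation cross covariance
            phh = np.var(hx, ddof=1) + obs_err_var
            gain = pxh / phh                                              # Kalman gain, one value per state variable
            perturbed_obs = obs + np.random.normal(0.0, np.sqrt(obs_err_var), n_ens)
            return ensemble + gain[:, None] * (perturbed_obs - hx)[None, :]

        # Two-layer state [surface, root zone], 30 members (illustrative volumetric soil moisture)
        ens = np.random.normal([0.25, 0.30], 0.05, size=(30, 2)).T
        analysis = enkf_update(ens, obs=0.20, obs_err_var=0.02 ** 2)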

  18. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
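
    A minimal sketch of the point made above: for a simple linear, additive error model y = a + b*x + e with e ~ N(0, sigma2), the common metrics follow directly from the model parameters and the signal statistics. The derivation and the synthetic check below are an illustration, not the authors' implementation.

        import numpy as np

        def metrics_from_error_model(a, b, sigma2, mu_x, var_x):
            """Bias, MSE, and correlation implied by y = a + b*x + e."""
            bias = a + (b - 1.0) * mu_x
            mse = bias ** 2 + (b - 1.0) ** 2 * var_x + sigma2
            corr = b * np.sqrt(var_x) / np.sqrt(b ** 2 * var_x + sigma2)
            return bias, mse, corr

        # Check against the empirical metrics on synthetic data
        rng = np.random.default_rng(4)
        x = rng.normal(5.0, 2.0, 100000)
        y = 1.0 + 0.8 * x + rng.normal(0.0, 1.0, 100000)
        analytic = metrics_from_error_model(1.0, 0.8, 1.0, 5.0, 4.0)
        empirical = (np.mean(y - x), np.mean((y - x) ** 2), np.corrcoef(x, y)[0, 1])
        print(analytic, empirical)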

  19. Frequency standards requirements of the NASA deep space network to support outer planet missions

    NASA Technical Reports Server (NTRS)

    Fliegel, H. F.; Chao, C. C.

    1974-01-01

    Navigation of Mariner spacecraft to Jupiter and beyond will require greater accuracy of positional determination than heretofore obtained if the full experimental capabilities of this type of spacecraft are to be utilized. Advanced navigational techniques which will be available by 1977 include Very Long Baseline Interferometry (VLBI), three-way Doppler tracking (sometimes called quasi-VLBI), and two-way Doppler tracking. It is shown that VLBI and quasi-VLBI methods depend on the same basic concept, and that they impose nearly the same requirements on the stability of frequency standards at the tracking stations. It is also shown how a realistic modelling of spacecraft navigational errors prevents overspecifying the requirements to frequency stability.

  20. Impact of a reengineered electronic error-reporting system on medication event reporting and care process improvements at an urban medical center.

    PubMed

    McKaig, Donald; Collins, Christine; Elsaid, Khaled A

    2014-09-01

    A study was conducted to evaluate the impact of a reengineered approach to electronic error reporting at a 719-bed multidisciplinary urban medical center. The main outcome of interest was the number of medication errors reported monthly during the preimplementation (20 months) and postimplementation (26 months) phases. An interrupted time series analysis was used to describe baseline errors, the immediate change following implementation of the current electronic error-reporting system (e-ERS), and the trend of error reporting during postimplementation. Errors were categorized according to severity using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Medication Error Index classifications. Reported errors were further analyzed by reporter and error site. During preimplementation, the mean number of monthly reported errors was 40.0 (95% confidence interval [CI]: 36.3-43.7). Immediately following e-ERS implementation, monthly reported errors significantly increased by 19.4 errors (95% CI: 8.4-30.5). The change in slope of the reported errors trend was estimated at 0.76 (95% CI: 0.07-1.22). Near misses and no-patient-harm errors accounted for 90% of all errors, while errors that caused increased patient monitoring or temporary harm accounted for 9% and 1%, respectively. Nurses were the most frequent reporters, while physicians were more likely to report high-severity errors. Medical care units accounted for approximately half of all reported errors. Following the intervention, there was a significant increase in reporting of prevented errors and errors that reached the patient with no resultant harm. This improvement in reporting was sustained for 26 months and has contributed to designing and implementing quality improvement initiatives to enhance the safety of the medication use process.
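
    A minimal sketch of a segmented (interrupted time series) regression of the kind described above, assuming statsmodels; the synthetic monthly counts are shaped to the reported estimates (baseline mean near 40, level change near 19.4, slope change near 0.76) purely for illustration and are not the study data.

        import numpy as np
        import statsmodels.api as sm

        months = np.arange(46)                                 # 20 pre + 26 post months
        post = (months >= 20).astype(float)                    # e-ERS implementation indicator
        since = np.where(months >= 20, months - 20, 0.0)       # months since implementation

        rng = np.random.default_rng(5)
        y = 40 + 0.0 * months + 19.4 * post + 0.76 * since + rng.normal(0.0, 4.0, 46)

        X = sm.add_constant(np.column_stack([months, post, since]))
        fit = sm.OLS(y, X).fit()
        # Coefficients: baseline level, baseline trend, immediate level change, change in slope
        print(fit.params)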

  1. Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.; Connolly, Joseph W.

    2006-01-01

    The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended as the baseline error propagation analysis to which Earth-based and Lunar-based radiometric data are added, in order to compare these different architecture schemes and to quantify the benefits of an integrated approach for handling lunar surface mobility applications near the Lunar South Pole or on the Lunar Farside.

  2. Speech Correction for Children with Cleft Lip and Palate by Networking of Community-Based Care.

    PubMed

    Hanchanlert, Yotsak; Pramakhatay, Worawat; Pradubwong, Suteera; Prathanee, Benjamas

    2015-08-01

    Prevalence of cleft lip and palate (CLP) is high in Northeast Thailand. Most children with CLP face many problems, particularly compensatory articulation disorders (CAD) that persist beyond surgery, while speech services and the number of speech and language pathologists (SLPs) are limited. To determine the effectiveness of networking of the Khon Kaen University (KKU) Community-Based Speech Therapy Model: Kosumphisai Hospital, Kosumphisai District and Maha Sarakham Hospital, Mueang District, Maha Sarakham Province, for reduction of the number of articulation errors for children with CLP. Eleven children with CLP were recruited into three one-year projects of the KKU Community-Based Speech Therapy Model. Articulation tests were formally assessed by qualified SLPs for baseline and post-treatment outcomes. Teaching on services for speech assistants (SAs) was conducted by SLPs. Assigned speech correction (SC) was performed by SAs at home and at local hospitals. Caregivers also gave SC at home 3-4 days a week. Networking of the Community-Based Speech Therapy Model significantly reduced the number of articulation errors for children with CLP at both word and sentence levels (mean difference = 6.91, 95% confidence interval = 4.15-9.67; mean difference = 5.36, 95% confidence interval = 2.99-7.73, respectively). Networking by Kosumphisai and Maha Sarakham within the KKU Community-Based Speech Therapy Model was a valid and efficient method for providing speech services for children with cleft palate and could be extended to any area in Thailand and to other developing countries that have similar contexts.

  3. Dynamic regulation of heart rate during acute hypotension: new insight into baroreflex function

    NASA Technical Reports Server (NTRS)

    Zhang, R.; Behbehani, K.; Crandall, C. G.; Zuckerman, J. H.; Levine, B. D.; Blomqvist, C. G. (Principal Investigator)

    2001-01-01

    To examine the dynamic properties of baroreflex function, we measured beat-to-beat changes in arterial blood pressure (ABP) and heart rate (HR) during acute hypotension induced by thigh cuff deflation in 10 healthy subjects under supine resting conditions and during progressive lower body negative pressure (LBNP). The quantitative, temporal relationship between ABP and HR was fitted by a second-order autoregressive (AR) model. The frequency response was evaluated by transfer function analysis. Results: HR changes during acute hypotension appear to be controlled by an ABP error signal between baseline and induced hypotension. The quantitative relationship between changes in ABP and HR is characterized by a second-order AR model with a pure time delay of 0.75 s containing low-pass filter properties. During LBNP, the change in HR/change in ABP during induced hypotension significantly decreased, as did the numerator coefficients of the AR model and transfer function gain. Conclusions: 1) Beat-to-beat HR responses to dynamic changes in ABP may be controlled by an error signal rather than directional changes in pressure, suggesting a "set point" mechanism in short-term ABP control. 2) The quantitative relationship between dynamic changes in ABP and HR can be described by a second-order AR model with a pure time delay. 3) The ability of the baroreflex to evoke a HR response to transient changes in pressure was reduced during LBNP, which was due primarily to a reduction of the static gain of the baroreflex.
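
    A minimal least-squares sketch of fitting a second-order autoregressive model with a fixed input delay, in the spirit of the ABP-HR model described above; the exact model structure used in the study is not given here, so the ARX form, the coefficients, and the synthetic data are assumptions for illustration only.

        import numpy as np

        def fit_arx2(hr, abp, delay):
            """Fit HR(t) = a1*HR(t-1) + a2*HR(t-2) + b0*ABP(t-delay) + c by least squares."""
            hr, abp = np.asarray(hr, float), np.asarray(abp, float)
            start = max(2, delay)
            rows = [[hr[t - 1], hr[t - 2], abp[t - delay], 1.0] for t in range(start, len(hr))]
            coef, *_ = np.linalg.lstsq(np.array(rows), hr[start:], rcond=None)
            return coef   # a1, a2, b0, intercept

        # Synthetic beat-to-beat series: HR partially driven by a delayed pressure signal
        rng = np.random.default_rng(6)
        abp = 80 + rng.normal(0, 5, 300)
        hr = np.full(300, 60.0)
        for t in range(2, 300):
            hr[t] = 0.6 * hr[t - 1] + 0.2 * hr[t - 2] - 0.3 * (abp[t - 1] - 80) + 12 + rng.normal(0, 0.5)
        print(fit_arx2(hr, abp, delay=1))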

  4. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    NASA Astrophysics Data System (ADS)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing the data assimilation methodology. One of the critical components that determines the amount and propagation of observation information into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation impact the simulation of heavy rainfall events over a southern state in India, Karnataka. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gage observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation has improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one that used global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, are simulated more realistically using regional BES than global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.

  5. Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects

    PubMed Central

    Berniker, Max; Kording, Konrad P.

    2011-01-01

    Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574

  6. Constraints on Pacific plate kinematics and dynamics with global positioning system measurements

    NASA Technical Reports Server (NTRS)

    Dixon, T. H.; Golombek, M. P.; Thornton, C. L.

    1985-01-01

    A measurement program designed to investigate kinematic and dynamic aspects of plate tectonics in the Pacific region by means of satellite observations is proposed. Accuracy studies are summarized showing that for short baselines (less than 100 km), the measuring accuracy of global positioning system (GPS) receivers can be in the centimeter range. For longer baselines, uncertainty in the orbital ephemerides of the GPS satellites could be a major source of error. Simultaneous observations at widely (about 300 km) separated fiducial stations over the Pacific region should permit accuracy in the centimeter range for baselines of up to several thousand kilometers. The optimum performance level is based on the assumption that fiducial baselines are known a priori to the centimeter range. An example fiducial network for a GPS study of the South Pacific region is described.

  7. Very long baseline IPS observations of the solar wind speed in the fast polar streams

    NASA Technical Reports Server (NTRS)

    Rao, A. Pramesh; Ananthakrishnan, S.; Balasubramanian, V.; Coles, William A.

    1995-01-01

    Observations of intensity scintillation (IPS) with two or more spaced antennas have been widely used to measure the solar wind velocity. Such methods are particularly valuable in regions which spacecraft have not yet penetrated, but they are also very useful in improving the spatial and temporal sampling of the solar wind, even in regions where spacecraft data are available. The principle of the measurement is to measure the time delay tau(sub d) between the scintillations observed with an antenna baseline b. The velocity estimate is simply V = b/tau(sub d). The error in the estimation of the time delay, delta tau(sub d), is independent of the baseline length; thus the relative error in the velocity estimate, delta V/V approximately equal to delta tau(sub d)/tau(sub d), is inversely proportional to tau(sub d) and hence to b. However, the use of a long baseline b has a less obvious advantage: it provides a means for separating fast and slow contributions when both are present in the scattering region. Here we present recent observations made using the large cylinder antenna at Ooty in the Nilgiri Hills of South India and one of the 45 m dishes of GMRT near Pune in west-central India. The baseline of 900 km is, by a considerable margin, the longest ever used for IPS and provides excellent velocity resolution. These results, compared with the ULYSSES observations and other IPS measurements made closer to the Sun with higher-frequency instruments such as EISCAT and the VLBA, will provide a precise measure of the velocity profile of the fast north-polar stream.
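
    A minimal numerical illustration of the velocity estimate and its fractional error described above (V = b/tau(sub d) and delta V/V approximately equal to delta tau(sub d)/tau(sub d)); the delay and error values are assumed for illustration, not measured quantities from this work.

        baseline_km = 900.0      # Ooty-GMRT baseline length
        tau_d_s = 2.0            # measured scintillation time delay (assumed value)
        dtau_s = 0.05            # delay estimation error, roughly independent of baseline

        v_km_s = baseline_km / tau_d_s
        dv_km_s = v_km_s * (dtau_s / tau_d_s)   # fractional error shrinks as the baseline grows
        print(v_km_s, dv_km_s)                   # 450 km/s with an uncertainty of 11.25 km/s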

  8. The Association between Maternal Reproductive Age and Progression of Refractive Error in Urban Students in Beijing

    PubMed Central

    Vasudevan, Balamurali; Jin, Zi Bing; Ciuffreda, Kenneth J.; Jhanji, Vishal; Zhou, Hong Jia; Wang, Ning Li; Liang, Yuan Bo

    2015-01-01

    Purpose To investigate the association between maternal reproductive age and their children's refractive error progression in Chinese urban students. Methods The Beijing Myopia Progression Study was a three-year cohort investigation. Cycloplegic refraction of these students at both baseline and follow-up vision examinations, as well as non-cycloplegic refraction of their parents at baseline, were performed. A student's refractive change was defined as the cycloplegic spherical equivalent (SE) of the right eye at the final follow-up minus the cycloplegic SE of the right eye at baseline. Results At the final follow-up, 241 students (62.4%) were reexamined. 226 students (58.5%) with complete refractive data, as well as complete parental reproductive age data, were enrolled. The average paternal and maternal age increased from 29.4 years and 27.5 years in 1993–1994 to 32.6 years and 29.2 years in 2003–2004, respectively. In the multivariate analysis, students who were younger (β = 0.08 diopter/year/year, P<0.001), with more myopic refraction at baseline (β = 0.02 diopter/year/diopter, P = 0.01), and with older maternal reproductive age (β = -0.18 diopter/year/decade, P = 0.01), had more myopic refractive change. After stratifying the parental reproductive age into quartile groups, children with older maternal reproductive age (trend test: P = 0.04) had more myopic refractive change, after adjusting for the children's age, baseline refraction, maternal refraction, and near work time. However, no significant association between myopic refractive change and paternal reproductive age was found. Conclusions In this cohort, children with older maternal reproductive age had more myopic refractive change. This new risk factor for myopia progression may partially explain the faster myopic progression found in the Chinese population in recent decades. PMID:26421841

  9. Safety Performance of Airborne Separation: Preliminary Baseline Testing

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Wing, David J.; Baxley, Brian T.

    2007-01-01

    The Safety Performance of Airborne Separation (SPAS) study is a suite of Monte Carlo simulation experiments designed to analyze and quantify the safety behavior of airborne separation. This paper presents results of preliminary baseline testing. The preliminary baseline scenario is designed to be very challenging, consisting of randomized routes in generic high-density airspace in which all aircraft are constrained to the same flight level. Sustained traffic density is varied from approximately 3 to 15 aircraft per 10,000 square miles, approximating up to about 5 times today's traffic density in a typical sector. Research at high traffic densities and at multiple flight levels is planned within the next two years. Basic safety metrics for aircraft separation are collected and analyzed. During the progression of experiments, various errors, uncertainties, delays, and other variables potentially impacting system safety will be incrementally introduced to analyze the effect on safety of the individual factors as well as their interaction and collective effect. In this paper we report the results of the first experiment, which addresses the preliminary baseline condition tested over a range of traffic densities. Early results at five times the typical traffic density in today's NAS indicate that, under the assumptions of this study, airborne separation can be safely performed. In addition, we report on initial observations from an exploration of four additional factors tested at a single traffic density: broadcast surveillance signal interference, extent of intent sharing, pilot delay, and wind prediction error.

  10. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are utilized widely in the field of high speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots to a single link. Because such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on that model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures. Thus it captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the sense of statistics, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also identified. By taking into account the error factors which are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  11. Sustained acceleration on perception of relative position and motion.

    PubMed

    McKinley, R Andrew; Tripp, Lloyd D; Fullerton, Kathy L; Goodyear, Chuck

    2013-03-01

    Air-to-air refueling, formation flying, and projectile countermeasures all rely on a pilot's ability to be aware of his position and motion relative to another object. Eight subjects participated in the study, all members of the sustained acceleration stress panel at Wright-Patterson AFB, OH. The task consisted of the subject performing a two-dimensional join up task between a KC-135 tanker and an F-16. The objective was to guide the nose of the F-16 to the posterior end of the boom extended from the tanker, and hold this position for 2 s. If the F-16 went past the tanker, or misaligned with the tanker, it would be recorded as an error. These tasks were performed during four G(z) acceleration profiles starting from a baseline acceleration of 1.5 G(z). The plateaus were 3, 5, and 7 G(z). The final acceleration exposure was a simulated aerial combat maneuver (SACM). One subject was an outlier and therefore omitted from analysis. The mean capture time and percent error data were recorded and compared separately. There was a significant difference in error percentage change from baseline among the G(z) profiles, but not capture time. Mean errors were approximately 15% higher in the 7 G profile and 10% higher during the SACM. This experiment suggests that the ability to accurately perceive the motion of objects relative to other objects is impeded at acceleration levels of 7 G(z) or higher.

  12. Global and regional kinematics with GPS

    NASA Technical Reports Server (NTRS)

    King, Robert W.

    1994-01-01

    The inherent precision of the doubly differenced phase measurement and the low cost of instrumentation made GPS the space geodetic technique of choice for regional surveys as soon as the constellation reached acceptable geometry in the area of interest: 1985 in western North America, the early 1990s in most of the world. Instrument and site-related errors for horizontal positioning are usually less than 3 mm, so that the dominant source of error is uncertainty in the reference frame defined by the satellite orbits and the tracking stations used to determine them. Prior to about 1992, when the tracking network for most experiments was globally sparse, the number of fiducial sites or the level at which they could be tied to an SLR or VLBI reference frame usually set the accuracy limit. Recently, with a global network of over 30 stations, the limit is set more often by deficiencies in models for non-gravitational forces acting on the satellites. For regional networks in the northern hemisphere, reference frame errors are currently about 3 parts per billion (ppb) in horizontal position, allowing centimeter-level accuracies over intercontinental distances and less than 1 mm for a 100 km baseline. The accuracy of GPS measurements for monitoring height variations is generally 2-3 times worse than for horizontal motions. As for VLBI, the primary source of error is unmodeled fluctuations in atmospheric water vapor, but both reference frame uncertainties and some instrument errors are more serious for vertical than for horizontal measurements. Under good conditions, daily repeatabilities at the level of 10 mm rms were achieved. This paper will summarize the current accuracy of GPS measurements and their implication for the use of SLR to study regional kinematics.

  13. Neural control of blood pressure in women: differences according to age

    PubMed Central

    Peinado, Ana B.; Harvey, Ronee E.; Hart, Emma C.; Charkoudian, Nisha; Curry, Timothy B.; Nicholson, Wayne T.; Wallin, B. Gunnar; Joyner, Michael J.; Barnes, Jill N.

    2017-01-01

    Purpose The blood pressure “error signal” represents the difference between an individual’s mean diastolic blood pressure and the diastolic blood pressure at which 50% of cardiac cycles are associated with a muscle sympathetic nerve activity burst (the “T50”). In this study we evaluated whether T50 and the error signal related to the extent of change in blood pressure during autonomic blockade in young and older women, to study potential differences in sympathetic neural mechanisms regulating blood pressure before and after menopause. Methods We measured muscle sympathetic nerve activity and blood pressure in 12 premenopausal (25±1 years) and 12 postmenopausal women (61±2 years) before and during complete autonomic blockade with trimethaphan camsylate. Results At baseline, young women had a negative error signal (−8±1 versus 2±1 mmHg, p<0.001; respectively) and lower muscle sympathetic nerve activity (15±1 versus 33±3 bursts/min, p<0.001; respectively) than older women. The change in diastolic blood pressure after autonomic blockade was associated with baseline T50 in older women (r=−0.725, p=0.008) but not in young women (r=−0.337, p=0.29). Women with the most negative error signal had the lowest muscle sympathetic nerve activity in both groups (young: r=0.886, p<0.001; older: r=0.870, p<0.001). Conclusions Our results suggest that there are differences in baroreflex control of muscle sympathetic nerve activity between young and older women, using the T50 and error signal analysis. This approach provides further information on autonomic control of blood pressure in women. PMID:28205011

  14. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  15. Computational investigation of flow control by means of tubercles on Darrieus wind turbine blades

    NASA Astrophysics Data System (ADS)

    Sevinç, K.; Özdamar, G.; Şentürk, U.; Özdamar, A.

    2015-09-01

    This work presents the current status of the computational study of boundary layer control on a vertical axis wind turbine blade by modifying the blade geometry for use in wind energy conversion. The control method is a passive one that comprises the implementation of the tubercle geometry of a humpback whale flipper onto the leading edge of the blades. The baseline design is an H-type, three-bladed Darrieus turbine with a NACA 0015 cross-section. The finite-volume based software ANSYS Fluent was used in the simulations. Using the optimum control parameters for a NACA 634-021 profile given by Johari et al. (2006), the turbine blades were modified. Three-dimensional, unsteady, turbulent simulations for the blade were conducted to look for a possible improvement in performance. The flow structure on the blades was investigated and flow phenomena such as separation and stall were examined to understand their impact on the overall performance. For a tip speed ratio of 2.12, good agreement was obtained in the validation of the baseline model with a relative error in time-averaged power coefficient of 1.05%. Modified turbine simulations with a less expensive but less accurate turbulence model yielded a decrease in power coefficient. Results are shown comparatively.

  16. Racial disparities in the health benefits of educational attainment: a study of inflammatory trajectories among African American and white adults.

    PubMed

    Fuller-Rowell, Thomas E; Curtis, David S; Doan, Stacey N; Coe, Christopher L

    2015-01-01

    The current study examined the prospective effects of educational attainment on proinflammatory physiology among African American and white adults. Participants were 1192 African Americans and 1487 whites who participated in Year 5 (mean [standard deviation] age = 30 [3.5] years), and Year 20 (mean [standard deviation] age = 45 [3.5]) of an ongoing longitudinal study. Initial analyses focused on age-related changes in fibrinogen across racial groups, and parallel analyses for C-reactive protein and interleukin-6 assessed at Year 20. Models then estimated the effects of educational attainment on changes in inflammation for African Americans and whites before and after controlling for four blocks of covariates: a) early life adversity, b) health and health behaviors at baseline, c) employment and financial measures at baseline and follow-up, and d) psychosocial stresses in adulthood. African Americans had larger increases in fibrinogen over time than whites (B = 24.93, standard error = 3.24, p < .001), and 37% of this difference was explained after including all covariates. Effects of educational attainment were weaker for African Americans than for whites (B = 10.11, standard error = 3.29, p = .002), and only 8% of this difference was explained by covariates. Analyses for C-reactive protein and interleukin-6 yielded consistent results. The effects of educational attainment on inflammation levels were stronger for white than for African American participants. Why African Americans do not show the same health benefits with educational attainment is an important question for health disparities research.

  17. Sensing the bed-rock movement due to ice unloading from space using InSAR time-series

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Amelung, F.; Dixon, T. H.; Wdowinski, S.

    2014-12-01

    Ice sheets in the Arctic region have been retreating rapidly since the late 1990s. Typical ice loss rates are 0.5 - 1 m/yr at the Canadian Arctic Archipelago, ~1 m/yr at the Icelandic ice sheets, and several meters per year at the edge of the Greenland ice sheet. Such unloading causes deformation of the Earth's crust (several millimeters per year) that is measurable with Synthetic Aperture Radar Interferometry (InSAR). Using small baseline time-series analysis, this signal is retrieved after noise sources such as orbit error, atmospheric delay and DEM error are removed. We present results from the Vatnajokull ice cap, Petermann glacier and Barnes ice cap using ERS, Envisat and TerraSAR-X data. Up to 2 cm/yr of relative radar line-of-sight displacement is detected. The pattern of deformation matches the shape of the ice sheet very well. The result in Iceland was used to develop a new model for ice mass balance estimation from 1995 to 2010. Other applications of this technique include validation of ICESat- or GRACE-based ice sheet models and constraints on the Earth's rheology (Young's modulus, viscosity and so on). Moreover, we find a narrow (~1 km) uplift zone close to the periglacial area of Petermann glacier, which may be due to a special rheology under the ice stream.
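
    A minimal sketch (not the authors' processing chain) of the small-baseline idea behind such time-series analysis: each unwrapped interferogram supplies a line-of-sight displacement between two dates, modelled here as a constant LOS velocity plus a DEM-error term scaled by the perpendicular baseline. The numbers and the geometry factor are hypothetical.

```python
# Hedged sketch: least-squares estimation of LOS velocity and a DEM-error term
# from a handful of small-baseline interferograms for a single pixel.
import numpy as np

dt_yr = np.array([0.5, 1.0, 1.5, 0.5, 1.0])              # time spans of the interferograms (yr)
bperp = np.array([120.0, -80.0, 40.0, 200.0, -150.0])    # perpendicular baselines (m)
d_obs = np.array([0.011, 0.019, 0.031, 0.013, 0.018])    # unwrapped LOS displacement (m)

geom = 1.0e-4                                            # hypothetical DEM-error scaling factor
A = np.column_stack([dt_yr, geom * bperp])               # unknowns: [velocity (m/yr), DEM error (m)]
(vel, dz), *_ = np.linalg.lstsq(A, d_obs, rcond=None)
print(f"LOS velocity = {vel * 1000:.1f} mm/yr, DEM error = {dz:.1f} m")
```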

  18. Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition

    PubMed Central

    Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096

  19. Predicting health-related quality of life (EQ-5D-5 L) and capability wellbeing (ICECAP-A) in the context of opiate dependence using routine clinical outcome measures: CORE-OM, LDQ and TOP.

    PubMed

    Peak, Jasmine; Goranitis, Ilias; Day, Ed; Copello, Alex; Freemantle, Nick; Frew, Emma

    2018-05-30

    Economic evaluation normally requires information to be collected on outcome improvement using utility values. This is often not collected during the treatment of substance use disorders, making cost-effectiveness evaluations of therapy difficult. One potential solution is the use of mapping to generate utility values from clinical measures. This study develops and evaluates mapping algorithms that could be used to predict the EuroQol-5D (EQ-5D-5 L) and the ICEpop CAPability measure for Adults (ICECAP-A) from three commonly used clinical measures: the CORE-OM, the LDQ and the TOP. Models were estimated using pilot trial data of heroin users in opiate substitution treatment. In the trial the EQ-5D-5 L, ICECAP-A, CORE-OM, LDQ and TOP were administered at baseline and at three- and twelve-month intervals. Mapping was conducted using estimation and validation datasets. The normal estimation dataset, which comprised baseline sample data, used ordinary least squares (OLS) and tobit regression methods. Data from the baseline and three-month time periods were combined to create a pooled estimation dataset. Cluster and mixed regression methods were used to map from this dataset. Predictive accuracy of the models was assessed using the root mean square error (RMSE) and the mean absolute error (MAE). Algorithms were validated using sample data from the follow-up time periods. Mapping algorithms can be used to predict the ICECAP-A and the EQ-5D-5 L in the context of opiate dependence. Although both measures can be predicted, the ICECAP-A was better predicted by the clinical measures. There were no advantages of pooling the data. Six mapping algorithms were chosen, with MAE scores ranging from 0.100 to 0.138 and RMSE scores ranging from 0.134 to 0.178. It is possible to predict the scores of the ICECAP-A and the EQ-5D-5 L with the use of mapping. In the context of opiate dependence, these algorithms provide the possibility of generating utility values from clinical measures and thus enabling economic evaluation of alternative therapy options. ISRCTN22608399. Date of registration: 27/04/2012. Date of first randomisation: 14/08/2012.
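
    A hedged sketch of the general workflow described (OLS mapping from clinical scores to a utility index, validated with MAE and RMSE). The predictors stand in for CORE-OM, LDQ and TOP scores; all data and coefficients are simulated, not the trial's.

```python
# Illustrative OLS mapping algorithm with MAE/RMSE validation on a hold-out dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
X_est = rng.normal(size=(80, 3))                          # stand-in for CORE-OM, LDQ, TOP at baseline
y_est = 0.7 - 0.05 * X_est[:, 0] - 0.03 * X_est[:, 1] + rng.normal(0, 0.05, 80)   # utility index

mapper = LinearRegression().fit(X_est, y_est)             # estimation dataset

X_val = rng.normal(size=(40, 3))                          # follow-up data used for validation
y_val = 0.7 - 0.05 * X_val[:, 0] - 0.03 * X_val[:, 1] + rng.normal(0, 0.05, 40)
pred = mapper.predict(X_val)

print(f"MAE  = {mean_absolute_error(y_val, pred):.3f}")
print(f"RMSE = {np.sqrt(mean_squared_error(y_val, pred)):.3f}")
```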

  20. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  1. Dehydration and performance on clinical concussion measures in collegiate wrestlers.

    PubMed

    Weber, Amanda Friedline; Mihalik, Jason P; Register-Mihalik, Johna K; Mays, Sally; Prentice, William E; Guskiewicz, Kevin M

    2013-01-01

    The effects of dehydration induced by wrestling-related weight-cutting tactics on clinical concussion outcomes, such as neurocognitive function, balance performance, and symptoms, have not been adequately studied. To evaluate the effects of dehydration on the outcome of clinical concussion measures in National Collegiate Athletic Association Division I collegiate wrestlers. Repeated-measures design. Clinical research laboratory. Thirty-two Division I healthy collegiate male wrestlers (age = 20.0 ± 1.4 years; height = 175.0 ± 7.5 cm; baseline mass = 79.2 ± 12.6 kg). Participants completed preseason concussion baseline testing in early September. Weight and urine samples were also collected at this time. All participants reported to prewrestling practice and postwrestling practice for the same test battery and protocol in mid-October. They had begun practicing weight-cutting tactics a day before prepractice and postpractice testing. Differences between these measures permitted us to evaluate how dehydration and weight-cutting tactics affected concussion measures. Sport Concussion Assessment Tool 2 (SCAT2), Balance Error Scoring System, Graded Symptom Checklist, and Simple Reaction Time scores. The Simple Reaction Time was measured using the Automated Neuropsychological Assessment Metrics. The SCAT2 measurements were lower at prepractice (P = .002) and postpractice (P < .001) when compared with baseline. The BESS error scores were higher at postpractice when compared with baseline (P = .015). The GSC severity scores were higher at prepractice (P = .011) and postpractice (P < .001) than at baseline, and higher at postpractice than at prepractice (P = .003). The number of Graded Symptom Checklist symptoms reported was also higher at prepractice (P = .036) and postpractice (P < .001) when compared with baseline, and at postpractice when compared with prepractice (P = .003). Our results suggest that it is important for wrestlers to be evaluated in a euhydrated state to ensure that dehydration is not influencing the outcome of the clinical measures.

  2. Dehydration and Performance on Clinical Concussion Measures in Collegiate Wrestlers

    PubMed Central

    Weber, Amanda Friedline; Mihalik, Jason P.; Register-Mihalik, Johna K.; Mays, Sally; Prentice, William E.; Guskiewicz, Kevin M.

    2013-01-01

    Context: The effects of dehydration induced by wrestling-related weight-cutting tactics on clinical concussion outcomes, such as neurocognitive function, balance performance, and symptoms, have not been adequately studied. Objective: To evaluate the effects of dehydration on the outcome of clinical concussion measures in National Collegiate Athletic Association Division I collegiate wrestlers. Design: Repeated-measures design. Setting: Clinical research laboratory. Patients or Other Participants: Thirty-two Division I healthy collegiate male wrestlers (age = 20.0 ± 1.4 years; height = 175.0 ± 7.5 cm; baseline mass = 79.2 ± 12.6 kg). Intervention(s): Participants completed preseason concussion baseline testing in early September. Weight and urine samples were also collected at this time. All participants reported to prewrestling practice and postwrestling practice for the same test battery and protocol in mid-October. They had begun practicing weight-cutting tactics a day before prepractice and postpractice testing. Differences between these measures permitted us to evaluate how dehydration and weight-cutting tactics affected concussion measures. Main Outcome Measures: Sport Concussion Assessment Tool 2 (SCAT2), Balance Error Scoring System, Graded Symptom Checklist, and Simple Reaction Time scores. The Simple Reaction Time was measured using the Automated Neuropsychological Assessment Metrics. Results: The SCAT2 measurements were lower at prepractice (P = .002) and postpractice (P < .001) when compared with baseline. The BESS error scores were higher at postpractice when compared with baseline (P = .015). The GSC severity scores were higher at prepractice (P = .011) and postpractice (P < .001) than at baseline, and higher at postpractice than at prepractice (P = .003). The number of Graded Symptom Checklist symptoms reported was also higher at prepractice (P = .036) and postpractice (P < .001) when compared with baseline, and at postpractice when compared with prepractice (P = .003). Conclusions: Our results suggest that it is important for wrestlers to be evaluated in a euhydrated state to ensure that dehydration is not influencing the outcome of the clinical measures. PMID:23672379

  3. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  4. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  5. Global Velocities from VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; Gordon, David; MacMillan, Daniel

    1999-01-01

    Precise geodetic Very Long Baseline Interferometry (VLBI) measurements have been made since 1979 at about 130 points on all major tectonic plates, including stable interiors and deformation zones. From the data set of about 2900 observing sessions and about 2.3 million observations, useful three-dimensional velocities can be derived for about 80 sites using an incremental least-squares adjustment of terrestrial, celestial, Earth rotation and site/session-specific parameters. The long history and high precision of the data yield formal errors for horizontal velocity as low as 0.1 mm/yr, but the limitation on the interpretation of individual site velocities is the tie to the terrestrial reference frame. Our studies indicate that the effect of converting precise relative VLBI velocities to individual site velocities is an error floor of about 0.4 mm/yr. Most VLBI horizontal velocities in stable plate interiors agree with the NUVEL-1A model, but there are significant departures in Africa and the Pacific. Vertical precision is worse by a factor of 2-3, and there are significant non-zero values that can be interpreted as post-glacial rebound, regional effects, and local disturbances.

  6. Increased Muscle Sympathetic Nerve Activity and Impaired Executive Performance Capacity in Obstructive Sleep Apnea.

    PubMed

    Goya, Thiago T; Silva, Rosyvaldo F; Guerra, Renan S; Lima, Marta F; Barbosa, Eline R F; Cunha, Paulo Jannuzzi; Lobo, Denise M L; Buchpiguel, Carlos A; Busatto-Filho, Geraldo; Negrão, Carlos E; Lorenzi-Filho, Geraldo; Ueno-Pardi, Linda M

    2016-01-01

    To investigate muscle sympathetic nerve activity (MSNA) response and executive performance during mental stress in obstructive sleep apnea (OSA). Individuals with no other comorbidities (age = 52 ± 1 y, body mass index = 29 ± 0.4 kg/m2) were divided into two groups: (1) control (n = 15) and (2) untreated OSA (n = 20) defined by polysomnography. Mini-Mental State Examination (MMSE) and Intelligence quotient (IQ) were assessed. Heart rate (HR), blood pressure (BP), and MSNA (microneurography) were measured at baseline and during 3 min of the Stroop Color Word Test (SCWT). Sustained attention and inhibitory control were assessed by the number of correct answers and errors during SCWT. Control and OSA groups (apnea-hypopnea index, AHI = 8 ± 1 and 47 ± 1 events/h, respectively) were similar in age, MMSE, and IQ. Baseline HR and BP were similar and increased similarly during SCWT in control and OSA groups. In contrast, baseline MSNA was higher in OSA compared to controls. Moreover, MSNA significantly increased in the third minute of SCWT in OSA, but remained unchanged in controls (P < 0.05). The number of correct answers was lower and the number of errors was significantly higher during the second and third minutes of SCWT in the OSA group (P < 0.05). There was a significant correlation (P < 0.01) between the number of errors in the third minute of SCWT and AHI (r = 0.59), arousal index (r = 0.55), and minimum O2 saturation (r = -0.57). As compared to controls, MSNA is increased in patients with OSA at rest, and further significant MSNA increments and worse executive performance are seen during mental stress. URL: http://www.clinicaltrials.gov, registration number: NCT002289625. © 2016 Associated Professional Sleep Societies, LLC.

  7. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis

    PubMed Central

    2014-01-01

    Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and implementation variables were seldom reported. Conclusions In hospital-related settings, implementing CPOE is associated with a greater than 50% decline in pADEs, although the studies used weak designs. Decreases in medication errors are similar and robust to variations in important aspects of intervention design and context. This suggests that CPOE implementation, as subsidized under the HITECH Act, may benefit public health. More detailed reporting of the context and process of implementation could shed light on factors associated with greater effectiveness. PMID:24894078
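
    An illustrative sketch of the random-effects pooling the review describes, using the standard DerSimonian-Laird estimator; the study risk ratios and confidence intervals below are made up and are not the review's data.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of risk ratios on the log scale.
import numpy as np

rr = np.array([0.35, 0.55, 0.62, 0.41, 0.50])            # hypothetical study risk ratios
ci_lo = np.array([0.20, 0.35, 0.40, 0.22, 0.30])
ci_hi = np.array([0.61, 0.86, 0.96, 0.76, 0.83])

y = np.log(rr)                                           # log risk ratios
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)        # SE recovered from the 95% CI
w = 1.0 / se**2                                          # fixed-effect weights

q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)       # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (se**2 + tau2)                            # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f} to {np.exp(pooled + 1.96 * se_pooled):.2f})")
```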

  8. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  9. Differential transfer processes in incremental visuomotor adaptation.

    PubMed

    Seidler, Rachel D

    2005-01-01

    Visuomotor adaptive processes were examined by testing transfer of adaptation between similar conditions. Participants made manual aiming movements with a joystick to hit targets on a computer screen, with real-time feedback display of their movement. They adapted to three different rotations of the display in a sequential fashion, with a return to baseline display conditions between rotations. Adaptation was better when participants had prior adaptive experiences. When performance was assessed using direction error (calculated at the time of peak velocity) and initial endpoint error (error before any overt corrective actions), transfer was greater when the final rotation reflected an addition of previously experienced rotations (adaptation order 30 degrees rotation, 15 degrees, 45 degrees) than when it was a subtraction of previously experienced conditions (adaptation order 45 degrees rotation, 15 degrees, 30 degrees). Transfer was equal regardless of adaptation order when performance was assessed with final endpoint error (error following any discrete, corrective actions). These results imply the existence of multiple independent processes in visuomotor adaptation.

  10. A novel variable baseline visibility detection system and its measurement method

    NASA Astrophysics Data System (ADS)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan

    2017-10-01

    As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to contamination of the optical system as well as sampling error, the accuracy and stability of such equipment often fail to meet requirements in low-visibility environments. To address this problem, a novel measurement instrument was designed based on multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver and applies a weighted least-squares method to process the signal. Theoretical analysis and experiments in the real atmosphere support this technique.
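
    A minimal sketch, under the Beer-Lambert assumption, of how a weighted least-squares fit over several receiver baselines could yield the extinction coefficient and hence a visibility estimate. The baselines, transmittances, weights, and the 5% contrast threshold are illustrative choices, not the instrument's specification.

```python
# Hedged sketch: weighted least squares for extinction from multi-baseline transmittance.
import numpy as np

L = np.array([10.0, 20.0, 30.0, 50.0])                # baseline lengths (m)
T = np.array([0.905, 0.818, 0.742, 0.606])            # measured transmittance at each baseline
w = 1.0 / np.array([0.02, 0.015, 0.012, 0.01])**2     # weights from assumed noise levels

# Beer-Lambert: ln T = -sigma * L  ->  weighted least squares through the origin
sigma = -np.sum(w * L * np.log(T)) / np.sum(w * L**2)

mor = -np.log(0.05) / sigma                           # meteorological optical range, 5% contrast
print(f"extinction = {sigma:.4f} 1/m, visibility ~ {mor:.0f} m")
```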

  11. Minimizing Interpolation Bias and Precision Error in In Vivo μCT-based Measurements of Bone Structure and Dynamics

    PubMed Central

    de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry

    2016-01-01

    In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342
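
    A hedged, two-dimensional sketch of the matched-angle idea described above: instead of leaving the baseline image fixed and rotating the follow-up image by the full registration angle, both images are rotated by half of it, so interpolation smoothing is shared equally before the voxel-wise subtraction. The real study works on registered 3D µCT volumes with full rigid-body transforms; this is only an illustration.

```python
# Illustrative comparison of the standard transformation and a matched-angle transformation.
import numpy as np
from scipy import ndimage

def standard_transform(baseline, followup, angle_deg):
    """Conventional registration: baseline stays fixed, follow-up takes the full rotation
    (and hence all of the interpolation smoothing)."""
    return baseline, ndimage.rotate(followup, angle_deg, reshape=False, order=3)

def matched_angle_transform(baseline, followup, angle_deg):
    """Matched-angle transformation (MAT): each image is rotated by half the angle,
    so interpolation affects both images equally."""
    half = angle_deg / 2.0
    return (ndimage.rotate(baseline, -half, reshape=False, order=3),
            ndimage.rotate(followup, half, reshape=False, order=3))

# toy usage on a 2D "image"; formation/resorption would be thresholded from the difference
baseline = np.random.default_rng(2).random((64, 64))
followup = baseline.copy()                       # pretend nothing changed between scans
b, f = matched_angle_transform(baseline, followup, 7.0)
remodeling_map = f - b
```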

  12. Minimizing Interpolation Bias and Precision Error in In Vivo µCT-Based Measurements of Bone Structure and Dynamics.

    PubMed

    de Bakker, Chantal M J; Altman, Allison R; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X Sherry

    2016-08-01

    In vivo µCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered µCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling.

  13. Impact of shorter wavelengths on optical quality for laws

    NASA Technical Reports Server (NTRS)

    Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.

    1993-01-01

    This study explores parametrically as a function of wavelength the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing wavelength and changing optical design resulted in optical quality tolerances within the SOA both at 2 and 1 micrometers. However, advanced sensing and control devices would be necessary to maintain on-orbit alignment. Optical tolerance for maintaining boresight stability would have to be tightened by a factor of 1.8 for a 2 micrometers system and by 3.6 for a 1 micrometers system relative to the baseline CO2 LAWS. Available SOA components could be used for operation at 2 micrometers but operation at 1 micrometers does not appear feasible.

  14. Impact of shorter wavelengths on optical quality for laws

    NASA Technical Reports Server (NTRS)

    Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.

    1993-01-01

    This study explores parametrically as a function of wavelength the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing wavelength and changing optical design resulted in optical quality tolerances within the SOA both at 2 and 1 micrometers. However, advanced sensing and control devices would be necessary to maintain on-orbit alignment. The optical tolerance for maintaining boresight stability would have to be tightened by a factor of 1.8 for a 2 micrometer system and by 3.6 for a 1 micrometer system relative to the baseline CO2 LAWS. Available SOA components could be used for operation at 2 micrometers but operation at 1 micrometer does not appear feasible.

  15. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
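
    An illustrative segmented-regression model of the form used in such interrupted time-series analyses: a baseline level, a pre-implementation trend, a step change at implementation, and a change in slope afterwards. The monthly error-rate series below is simulated (with parameters echoing the magnitudes reported above), not the study's data.

```python
# Hedged sketch: segmented regression for an interrupted time series of monthly error rates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = np.arange(1, 59)                            # 30 pre-implementation + 28 post months
post = (months > 30).astype(float)                   # indicator: 1 after implementation
months_post = np.where(post == 1, months - 30, 0.0)  # months elapsed since implementation

rate = (16.7 + 0.0 * months                          # stable baseline level and trend
        - 5.0 * post - 0.338 * months_post           # step change and slope change (simulated)
        + rng.normal(0, 1.0, months.size))

X = sm.add_constant(np.column_stack([months, post, months_post]))
fit = sm.OLS(rate, X).fit()
print(fit.params)   # [baseline level, pre-slope, level change at implementation, slope change]
```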

  16. The precision of a special purpose analog computer in clinical cardiac output determination.

    PubMed Central

    Sullivan, F J; Mroz, E A; Miller, R E

    1975-01-01

    Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, then the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394

  17. Visual error augmentation enhances learning in three dimensions.

    PubMed

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects returned rapidly to baseline within 6 trials.

  18. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  19. Response cost, reinforcement, and children's Porteus Maze qualitative performance.

    PubMed

    Neenan, D M; Routh, D K

    1986-09-01

    Sixty fourth-grade children were given two different series of the Porteus Maze Test. The first series was given as a baseline, and the second series was administered under one of four different experimental conditions: control, response cost, positive reinforcement, or negative verbal feedback. Response cost and positive reinforcement, but not negative verbal feedback, led to significant decreases in the number of all types of qualitative errors in relation to the control group. The reduction of nontargeted as well as targeted errors provides evidence for the generalized effects of response cost and positive reinforcement.

  20. Use B-spline interpolation fitting baseline for low concentration 2, 6-di-tertbutyl p-cresol determination in jet fuels by differential pulse voltammetry

    NASA Astrophysics Data System (ADS)

    Wen, D. S.; Wen, H.; Shi, Y. G.; Su, B.; Li, Z. C.; Fan, G. Z.

    2018-01-01

    A B-spline interpolation fitted baseline for electrochemical analysis by differential pulse voltammetry was established for determining low concentrations (less than 5.0 mg/L) of 2,6-di-tert-butyl-p-cresol (BHT) in jet fuel in the presence of 6-tert-butyl-2,4-xylenol. The experimental results show that the relative errors are less than 2.22%, the sum of standard deviations is less than 0.134 mg/L, and the correlation coefficient is more than 0.9851. If the 2,6-di-tert-butyl-p-cresol concentration is higher than 5.0 mg/L, a linear fitted baseline would be more applicable and simpler.
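
    A hedged sketch of the general baseline-subtraction idea: fit a smoothing B-spline through points on either side of the voltammetric peak and subtract it, leaving the peak current. The potentials, currents, anchor windows, and smoothing factor below are all hypothetical, not the paper's data.

```python
# Illustrative B-spline baseline fit and subtraction for a differential pulse voltammogram.
import numpy as np
from scipy.interpolate import splrep, splev

E = np.linspace(-0.2, 0.8, 201)                          # potential (V)
baseline_true = 2.0 + 1.5 * E + 0.8 * E**2               # drifting background current (µA)
peak = 0.6 * np.exp(-((E - 0.35) / 0.05)**2)             # analyte oxidation peak (µA)
i_meas = baseline_true + peak + np.random.default_rng(4).normal(0, 0.01, E.size)

anchors = (E < 0.15) | (E > 0.55)                        # regions assumed to be peak-free
tck = splrep(E[anchors], i_meas[anchors], s=0.05)        # B-spline through the baseline points
i_corrected = i_meas - splev(E, tck)                     # peak current above the fitted baseline

print(f"estimated peak current = {i_corrected.max():.2f} µA (true value 0.60 µA)")
```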

  1. Benefits of pulmonary rehabilitation in idiopathic pulmonary fibrosis.

    PubMed

    Swigris, Jeffrey J; Fairclough, Diane L; Morrison, Marianne; Make, Barry; Kozora, Elizabeth; Brown, Kevin K; Wamboldt, Frederick S

    2011-06-01

    Information on the benefits of pulmonary rehabilitation (PR) in patients with idiopathic pulmonary fibrosis (IPF) is growing, but information on PR's effects on certain important outcomes is lacking. We conducted a pilot study of PR in IPF and analyzed changes in functional capacity, fatigue, anxiety, depression, sleep, and health status from baseline to after completion of a standard, 6-week PR program. Six-min walk distance improved by a mean ± standard error of 202 ± 135 feet (P = .01) from baseline. Fatigue Severity Scale score also improved significantly, declining an average of 1.5 ± 0.5 points from baseline. There were trends toward improvement in anxiety, depression, and health status. PR improves functional capacity and fatigue in patients with IPF. (ClinicalTrials.gov registration NCT00692796.)

  2. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. 
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models, so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.

  3. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  4. Maps of Jovian radio emission

    NASA Technical Reports Server (NTRS)

    Depater, I.

    1977-01-01

    Observations were made of Jupiter with the Westerbork telescope at all three frequencies available: 610 MHz, 1415 MHz, and 4995 MHz. The raw measurements were corrected for position errors, atmospheric extinction, Faraday rotation, clock, frequency, and baseline errors, and errors due to a shadowing effect. The data were then converted into a brightness distribution on the sky by Fourier transformation. Maps of both thermal and nonthermal radiation were developed. Results indicate that the thermal disk of Jupiter measured at a wavelength of 6 cm has a temperature of 236 ± 15 K. The radiation belts have an overall structure governed by the trapping of electrons in the dipolar field of the planet with significant beaming of the synchrotron radiation into the plane of the magnetic equator.

  5. Common Scientific and Statistical Errors in Obesity Research

    PubMed Central

    George, Brandon J.; Beasley, T. Mark; Brown, Andrew W.; Dawson, John; Dimova, Rositsa; Divers, Jasmin; Goldsby, TaShauna U.; Heo, Moonseong; Kaiser, Kathryn A.; Keith, Scott; Kim, Mimi Y.; Li, Peng; Mehta, Tapan; Oakes, J. Michael; Skinner, Asheley; Stuart, Elizabeth; Allison, David B.

    2015-01-01

    We identify 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discuss how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and “p-value hacking,” 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. We hope that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician. PMID:27028280

  6. Quantifying the test-retest reliability of cerebral blood flow measurements in a clinical model of on-going post-surgical pain: A study using pseudo-continuous arterial spin labelling.

    PubMed

    Hodkinson, Duncan J; Krause, Kristina; Khawaja, Nadine; Renton, Tara F; Huggins, John P; Vennart, William; Thacker, Michael A; Mehta, Mitul A; Zelaya, Fernando O; Williams, Steven C R; Howard, Matthew A

    2013-01-01

    Arterial spin labelling (ASL) is increasingly being applied to study the cerebral response to pain in both experimental human models and patients with persistent pain. Despite its advantages, scanning time and reliability remain important issues in the clinical applicability of ASL. Here we present the test-retest analysis of concurrent pseudo-continuous ASL (pCASL) and visual analogue scale (VAS), in a clinical model of on-going pain following third molar extraction (TME). Using ICC performance measures, we were able to quantify the reliability of the post-surgical pain state and ΔCBF (change in CBF), both at the group and individual case level. Within-subject, the inter- and intra-session reliability of the post-surgical pain state was ranked good-to-excellent (ICC > 0.6) across both pCASL and VAS modalities. The parameter ΔCBF (change in CBF between pre- and post-surgical states) performed reliably (ICC > 0.4), provided that a single baseline condition (or the mean of more than one baseline) was used for subtraction. Between-subjects, the pCASL measurements in the post-surgical pain state and ΔCBF were both characterised as reliable (ICC > 0.4). However, the subjective VAS pain ratings demonstrated a significant contribution of pain state variability, which suggests diminished utility for interindividual comparisons. These analyses indicate that the pCASL imaging technique has considerable potential for the comparison of within- and between-subjects differences associated with pain-induced state changes and baseline differences in regional CBF. They also suggest that differences in baseline perfusion and functional lateralisation characteristics may play an important role in the overall reliability of the estimated changes in CBF. Repeated measures designs have the important advantage that they provide good reliability for comparing condition effects because all sources of variability between subjects are excluded from the experimental error. The ability to elicit reliable neural correlates of on-going pain using quantitative perfusion imaging may help support the conclusions derived from subjective self-report.
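
    A minimal sketch of one common reliability index of the kind quoted above, the one-way random-effects ICC(1,1), computed from a subjects-by-sessions matrix; the study may well have used a different ICC form, and the CBF values below are simulated.

```python
# Hedged sketch: ICC(1,1) from repeated perfusion measurements (rows = subjects, cols = sessions).
import numpy as np

x = np.array([[52.1, 53.0, 51.4],
              [47.8, 48.5, 47.1],      # hypothetical regional CBF (ml/100 g/min)
              [60.2, 58.9, 59.5],
              [55.0, 54.2, 56.1],
              [49.9, 50.7, 50.1]])
n, k = x.shape
subj_means = x.mean(axis=1)
grand = x.mean()

msb = k * np.sum((subj_means - grand)**2) / (n - 1)            # between-subject mean square
msw = np.sum((x - subj_means[:, None])**2) / (n * (k - 1))     # within-subject mean square
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"ICC(1,1) = {icc:.2f}")
```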

  7. Uniformly Processed Strong Motion Database for Himalaya and Northeast Region of India

    NASA Astrophysics Data System (ADS)

    Gupta, I. D.

    2018-03-01

    This paper presents the first uniformly processed comprehensive database of strong motion acceleration records for the extensive regions of the western Himalaya, northeast India, and the alluvial plains juxtaposing the Himalaya. This includes 146 three-component sets of old analog records corrected for instrument response and baseline distortions and 471 three-component sets of recent digital records corrected for baseline errors. The paper first provides a background on the evolution of strong motion data in India and the seismotectonics of the areas of recording, then describes the details of the recording stations and the contributing earthquakes, which is finally followed by the methodology used to obtain baseline corrected data in a uniform and consistent manner. Two different schemes in common use for baseline correction are based on the Ormsby filter without zero pads (Trifunac 1971) and on the Butterworth filter with zero pads at the start as well as at the end (Converse and Brady 1992). To integrate the advantages of both schemes, an Ormsby filter with zero pads at the start only is used in the present study. A large number of typical example results are presented to illustrate that the methodology adopted is able to provide realistic velocity and displacement records with a much smaller number of zero pads. The present strong motion database of corrected acceleration records will be useful for analyzing the ground motion characteristics of engineering importance, developing prediction equations for various strong motion parameters, and calibrating the seismological source model approach for ground motion simulation for seismically active and risk prone areas of India.
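
    A hedged sketch of a baseline-correction step of the general kind discussed: high-pass filter an acceleration trace that has been zero-padded at the start only, then integrate to velocity and displacement. For simplicity a Butterworth high-pass stands in for the Ormsby filter used in the paper; the corner frequency, filter order, and synthetic record are illustrative choices.

```python
# Illustrative baseline correction: leading zero pad, high-pass filter, double integration.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

dt = 0.01                                              # sample interval (s)
t = np.arange(0, 40, dt)
acc = np.exp(-0.1 * t) * np.sin(2 * np.pi * 1.5 * t) + 0.002 * t   # motion + baseline drift

npad = int(5.0 / dt)                                   # zero pad at the start only
padded = np.concatenate([np.zeros(npad), acc])

b, a = butter(4, 0.1 / (0.5 / dt), btype="highpass")   # 0.1 Hz corner, 4th order
corrected = filtfilt(b, a, padded)[npad:]              # filter, then drop the pad

vel = cumulative_trapezoid(corrected, dx=dt, initial=0.0)
disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)
print(f"peak ground displacement = {np.abs(disp).max():.3f}")
```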

  8. What errors do peer reviewers detect, and does training improve their ability to detect them?

    PubMed

    Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard

    2008-10-01

    To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of the reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion identifying biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of study. Short training packages have only a slight impact on improving error detection.

  9. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  10. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded-partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of the root-mean-square error of leave-one-out cross-validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters of regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose and the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
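
    A minimal sketch of selecting the number of PLS components by the minimum leave-one-out cross-validation RMSE, as described above, using scikit-learn. The unfolded GC×GC data matrix X and adulterant-fraction vector y are assumed inputs; this is not the cited study's code.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        def select_n_components(X, y, max_components=10):
            """Return the component count minimising leave-one-out RMSE (RMSECV)."""
            rmsecv = []
            for n in range(1, max_components + 1):
                pred = cross_val_predict(PLSRegression(n_components=n), X, y,
                                         cv=LeaveOneOut())
                rmsecv.append(np.sqrt(np.mean((np.asarray(y) - pred.ravel()) ** 2)))
            return int(np.argmin(rmsecv)) + 1, rmsecv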

  11. A refined index of model performance: a rejoinder

    USGS Publications Warehouse

    Legates, David R.; McCabe, Gregory J.

    2013-01-01

    Willmott et al. [Willmott CJ, Robeson SM, Matsuura K. 2012. A refined index of model performance. International Journal of Climatology, forthcoming. DOI:10.1002/joc.2419.] recently suggest a refined index of model performance (dr) that they purport to be superior to other methods. Their refined index ranges from −1.0 to 1.0 to resemble a correlation coefficient, but it is merely a linear rescaling of our modified coefficient of efficiency (E1) over the positive portion of the domain of dr. We disagree with Willmott et al. (2012) that dr provides a better interpretation; rather, E1 is more easily interpreted such that a value of E1 = 1.0 indicates a perfect model (no errors) while E1 = 0.0 indicates a model that is no better than the baseline comparison (usually the observed mean). Negative values of E1 (and, for that matter, dr < 0.5) indicate a substantially flawed model as they simply describe a ‘level of inefficacy’ for a model that is worse than the comparison baseline. Moreover, while dr is piecewise continuous, it is not continuous through the second and higher derivatives. We explain why the coefficient of efficiency (E or E2) and its modified form (E1) are superior and preferable to many other statistics, including dr, because of intuitive interpretability and because these indices have a fundamental meaning at zero. We also expand on the discussion begun by Garrick et al. [Garrick M, Cunnane C, Nash JE. 1978. A criterion of efficiency for rainfall-runoff models. Journal of Hydrology 36: 375-381.] and continued by Legates and McCabe [Legates DR, McCabe GJ. 1999. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resources Research 35(1): 233-241.] and Schaefli and Gupta [Schaefli B, Gupta HV. 2007. Do Nash values have value? Hydrological Processes 21: 2075-2080. DOI: 10.1002/hyp.6825.]. This important discussion focuses on the appropriate baseline comparison to use, and why the observed mean often may be an inadequate choice for model evaluation and development.
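
    The rescaling argument above can be made concrete in a few lines. The sketch below uses the standard absolute-error definitions of E1 and of dr (with scaling constant c = 2); it is an illustration, not code from either paper.

        import numpy as np

        def e1(obs, pred):
            """Modified coefficient of efficiency (absolute-error form)."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            return 1.0 - np.sum(np.abs(obs - pred)) / np.sum(np.abs(obs - obs.mean()))

        def d_r(obs, pred, c=2.0):
            """Willmott et al. (2012) refined index of agreement."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            a = np.sum(np.abs(pred - obs))
            b = c * np.sum(np.abs(obs - obs.mean()))
            return 1.0 - a / b if a <= b else b / a - 1.0

        # On the positive branch, d_r is just a linear rescaling of E1:
        obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        pred = np.array([1.1, 2.3, 2.8, 4.2, 4.9])
        assert np.isclose(d_r(obs, pred), (1.0 + e1(obs, pred)) / 2.0)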

  12. Digital elevation model generation from satellite interferometric synthetic aperture radar: Chapter 5

    USGS Publications Warehouse

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Lei; Lee, Wonjin; Lee, Chang-Wook

    2012-01-01

    An accurate digital elevation model (DEM) is a critical data set for characterizing the natural landscape, monitoring natural hazards, and georeferencing satellite imagery. The ideal interferometric synthetic aperture radar (InSAR) configuration for DEM production is a single-pass two-antenna system. Repeat-pass single-antenna satellite InSAR imagery, however, also can be used to produce useful DEMs. DEM generation from InSAR is advantageous in remote areas where the photogrammetric approach to DEM generation is hindered by inclement weather conditions. There are many sources of errors in DEM generation from repeat-pass InSAR imagery, for example, inaccurate determination of the InSAR baseline, atmospheric delay anomalies, and possible surface deformation because of tectonic, volcanic, or other sources during the time interval spanned by the images. This chapter presents practical solutions to identify and remove various artifacts in repeat-pass satellite InSAR images to generate a high-quality DEM.

  13. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  14. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    PubMed

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  15. Propulsion Controls Modeling for a Small Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey T.; Chicatelli, Amy; Franco, Kevin

    2017-01-01

    A nonlinear dynamic model and propulsion controller are developed for a small-scale turbofan engine. The small-scale turbofan engine is based on the Price Induction company's DGEN 380, one of the few turbofan engines targeted for the personal light jet category. Comparisons of the nonlinear dynamic turbofan engine model to actual DGEN 380 engine test data and a Price Induction simulation are provided. During engine transients, the nonlinear model typically agrees within 10 percent error, even though the nonlinear model was developed from limited available engine data. A gain scheduled proportional integral low speed shaft controller with limiter safety logic is created to replicate the baseline DGEN 380 controller. The new controller provides desired gain and phase margins and is verified to meet Federal Aviation Administration transient propulsion system requirements. In understanding benefits, there is a need to move beyond simulation for the demonstration of advanced control architectures and technologies by using real-time systems and hardware. The small-scale DGEN 380 provides a cost effective means to accomplish advanced controls testing on a relevant turbofan engine platform.
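
    A rough sketch of a gain-scheduled PI speed controller with an output limiter and simple anti-windup, in the spirit of the controller described above. The schedule breakpoints, gains and limits are made-up illustration values, not the DGEN 380 values.

        import numpy as np

        class GainScheduledPI:
            """PI controller whose gains are interpolated against an operating point."""
            def __init__(self, sched_pts, kp_table, ki_table, u_min, u_max):
                self.sched_pts, self.kp, self.ki = sched_pts, kp_table, ki_table
                self.u_min, self.u_max, self.integ = u_min, u_max, 0.0

            def step(self, setpoint, measurement, dt):
                err = setpoint - measurement
                kp = np.interp(measurement, self.sched_pts, self.kp)   # scheduled gains
                ki = np.interp(measurement, self.sched_pts, self.ki)
                u_unsat = kp * err + ki * (self.integ + err * dt)
                if self.u_min < u_unsat < self.u_max:                  # anti-windup: freeze
                    self.integ += err * dt                             # integrator if saturated
                return float(np.clip(u_unsat, self.u_min, self.u_max))

        # Hypothetical schedule: gains vary with low-speed-shaft speed (% of rated)
        ctrl = GainScheduledPI([50, 75, 100], [0.02, 0.015, 0.01],
                               [0.05, 0.04, 0.03], u_min=0.0, u_max=1.0)
        fuel_cmd = ctrl.step(setpoint=90.0, measurement=82.0, dt=0.01)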

  16. A long baseline global stereo matching based upon short baseline estimation

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi

    2018-05-01

    In global stereo vision, balancing the matching efficiency and computing accuracy seems to be impossible because they contradict each other. In the case of a long baseline, this contradiction becomes more prominent. In order to solve this difficult problem, this paper proposes a novel idea to improve both the efficiency and accuracy in global stereo matching for a long baseline. In this way, the reference images located between the long baseline image pairs are firstly chosen to form the new image pairs with short baselines. The relationship between the disparities of pixels in the image pairs with different baselines is revealed by considering the quantized error so that the disparity search range under the long baseline can be reduced by guidance of the short baseline to gain matching efficiency. Then, the novel idea is integrated into the graph cuts (GCs) to form a multi-step GC algorithm based on the short baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, the image information from the pixels that are non-occluded under the short baseline but are occluded for the long baseline can be employed to improve the matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for a long baseline stereo matching than when using the traditional GC algorithm. Finally, the validity of the proposed method is examined by experiments based on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in terms of efficiency and accuracy, and thus it is suitable for long baseline stereo matching.
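
    The baseline-ratio guidance described above amounts to predicting the long-baseline disparity from the short-baseline estimate and searching only a narrow window around it. A minimal sketch for parallel, rectified views follows; the ±1-pixel quantisation margin is an assumed illustration value.

        def long_baseline_search_range(d_short, b_short, b_long, quant_margin=1.0):
            """Predict the disparity search window under the long baseline from a
            short-baseline estimate d_short (pixels), for parallel rectified views."""
            scale = b_long / b_short                      # disparity grows with baseline
            d_pred = scale * d_short
            margin = scale * quant_margin                 # quantisation error also scales
            return d_pred - margin, d_pred + margin

        # e.g. d_short = 12 px estimated with a 5 cm baseline, target baseline 20 cm
        lo, hi = long_baseline_search_range(12, 0.05, 0.20)   # search only [44, 52] px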

  17. Visual outcomes after spectacles treatment in children with bilateral high refractive amblyopia.

    PubMed

    Lin, Pei-Wen; Chang, Hsueh-Wen; Lai, Ing-Chou; Teng, Mei-Ching

    2016-11-01

    The aim was to investigate the visual outcomes of treatment with spectacles for bilateral high refractive amblyopia in children three to eight years of age. Children with previously untreated bilateral refractive amblyopia were enrolled. Bilateral high refractive amblyopia was defined as visual acuity (VA) being worse than 6/9 in both eyes in the presence of 5.00 D or more of hyperopia, 5.00 D or more of myopia and 2.00 D or more of astigmatism. Full myopic and astigmatic refractive errors were corrected, and the hyperopic refractive errors were corrected within 1.00 D of the full correction. All children received visual assessments at four-weekly intervals. VA, Worth four-dot test and Randot preschool stereotest were assessed at baseline and every four weeks for two years. Twenty-eight children with previously untreated bilateral high refractive amblyopia were enrolled. The mean VA at baseline was 0.39 ± 0.24 logMAR and it significantly improved to 0.21, 0.14, 0.11, 0.05 and 0.0 logMAR at four, eight, 12, 24 weeks and 18 months, respectively (all p = 0.001). The mean stereoacuity (SA) was 1,143 ± 617 arcsec at baseline and it significantly improved to 701, 532, 429, 211 and 98 arcsec at four, eight, 12, 24 weeks and 18 months, respectively (all p = 0.001). The time interval for VA achieving 6/6 was significantly shorter in the eyes of low spherical equivalent (SE) (-2.00 D < SE < +2.00 D) than in those of high SE (SE > +2.00 D) (3.33 ± 2.75 months versus 8.11 ± 4.56 months, p = 0.0005). All subjects had normal fusion on Worth four-dot test at baseline and all follow-up visits. Refractive correction with good spectacles compliance improves VA and SA in young children with bilateral high refractive amblyopia. Patients with greater amounts of refractive error take longer to achieve resolution of amblyopia. © 2016 Optometry Australia.

  18. Image navigation and registration performance assessment tool set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Astrophysics Data System (ADS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-05-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
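
    The "3-sigma" metrics above are defined as percentile estimates rather than literal standard deviations. A one-line sketch of how such a metric could be accumulated over a 24-hour set of registration errors (the array name and units are hypothetical):

        import numpy as np

        def three_sigma_metric(errors_urad):
            """99.73rd percentile of the absolute errors accumulated over 24 h."""
            return np.percentile(np.abs(np.asarray(errors_urad)), 99.73)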

  19. Small baseline subsets approach of DInSAR for investigating land surface deformation along the high-speed railway

    NASA Astrophysics Data System (ADS)

    Rao, Xiong; Tang, Yunwei

    2014-11-01

    Land surface deformation evidently exists along a newly-built high-speed railway in the southeast of China. In this study, we utilize the Small BAseline Subsets (SBAS)-Differential Synthetic Aperture Radar Interferometry (DInSAR) technique to detect land surface deformation along the railway. In this work, 40 Cosmo-SkyMed satellite images were selected to analyze the spatial distribution and velocity of the deformation in the study area. 88 image pairs with high coherence were first chosen with an appropriate threshold. These images were used to deduce the deformation velocity map and the variation in time series. This result can provide information for orbit correction and ground control point (GCP) selection in the following steps. Then, more image pairs were selected to tighten the constraint in the time dimension and to improve the final result by decreasing the phase unwrapping error. 171 combinations of SAR pairs were ultimately selected. Reliable GCPs were re-selected according to the previously derived deformation velocity map. Orbital residual errors were rectified using these GCPs, and nonlinear deformation components were estimated. Therefore, a more accurate surface deformation velocity map was produced. Precise geodetic leveling work was carried out in the meantime. We compared the leveling result with the geocoded SBAS product using the nearest neighbour method. The mean error and the standard deviation of the error were 0.82 mm and 4.17 mm, respectively. This result demonstrates the effectiveness of the DInSAR technique for monitoring land surface deformation, which can provide reliable decision support for high-speed railway project design, construction, operation and maintenance.

  20. Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  1. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.
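
    The two dynamic-range definitions used above differ only in where the residual rms is measured. A small sketch, with the image, restored source model and on-source mask assumed to be available as arrays:

        import numpy as np

        def dynamic_ranges(image, model, on_source_mask):
            """Nominal DR uses off-source residual rms; true DR uses on-source rms."""
            residual = image - model                      # model = restored source structure
            peak = image.max()
            off_rms = residual[~on_source_mask].std()
            on_rms = residual[on_source_mask].std()
            return peak / off_rms, peak / on_rms          # (nominal, true)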

  2. Image Navigation and Registration Performance Assessment Tool Set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24-hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  3. Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map

    NASA Astrophysics Data System (ADS)

    Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.

    2016-03-01

    Multipath is one major error source in high-accuracy GNSS positioning. Various hardware and software approaches have been developed to mitigate the multipath effect. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches utilize the spatiotemporal repeatability of the multipath effect under static environments, hence they can be implemented to generate multipath correction models for real-time GNSS data processing. We focus on the spatial-temporal repeatability-based MHM and SF/ASF approaches and compare their performances for multipath reduction. Comparisons indicate that both the MHM and ASF approaches perform well, with residual variance reduction of about 50 % over the short span (the next 5 days), and maintain a roughly 45 % reduction level over the longer span (the next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications. The MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is medium to low frequency.
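
    A minimal sketch of building and applying a multipath hemispherical map from post-fit carrier-phase residuals: average the residuals on an azimuth/elevation grid, then look up the cell value as a correction for new observations. The grid resolution and input arrays are assumptions for illustration, not the cited implementation.

        import numpy as np

        def build_mhm(az_deg, el_deg, residuals, az_step=5.0, el_step=5.0):
            """Average residuals on an azimuth/elevation grid (the MHM)."""
            n_az, n_el = int(360 / az_step), int(90 / el_step)
            grid_sum = np.zeros((n_az, n_el))
            grid_cnt = np.zeros((n_az, n_el))
            ia = (np.asarray(az_deg) // az_step).astype(int) % n_az
            ie = np.clip((np.asarray(el_deg) // el_step).astype(int), 0, n_el - 1)
            np.add.at(grid_sum, (ia, ie), residuals)
            np.add.at(grid_cnt, (ia, ie), 1)
            with np.errstate(invalid="ignore"):
                return grid_sum / grid_cnt                # NaN where no data fell

        def mhm_correction(mhm, az_deg, el_deg, az_step=5.0, el_step=5.0):
            """Look up the multipath correction for a new observation direction."""
            ia = int(az_deg // az_step) % mhm.shape[0]
            ie = min(int(el_deg // el_step), mhm.shape[1] - 1)
            c = mhm[ia, ie]
            return 0.0 if np.isnan(c) else c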

  4. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
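
    A sketch of the kind of Sobol' analysis described above, assuming the SALib package is available and using a toy stand-in for the snow model. The forcing-error factors and bounds are illustrative values, not the study's configuration.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Forcing-error factors: precipitation bias, air-temperature bias, SW-radiation bias
        problem = {
            "num_vars": 3,
            "names": ["precip_bias", "temp_bias", "sw_bias"],
            "bounds": [[-0.5, 0.5], [-2.0, 2.0], [-50.0, 50.0]],
        }

        def toy_swe_model(x):
            """Stand-in for the snow model: peak SWE response to forcing biases."""
            p, t, sw = x
            return 500.0 * (1.0 + p) - 30.0 * max(t, 0.0) - 0.5 * max(sw, 0.0)

        X = saltelli.sample(problem, 1024)                # (N*(2D+2), 3) forcing-error samples
        Y = np.array([toy_swe_model(row) for row in X])
        Si = sobol.analyze(problem, Y)                    # first-order and total indices
        print(dict(zip(problem["names"], Si["S1"])))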

  5. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
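
    The PDF-shaping performance index described above can be sketched as follows: estimate the modeling-error PDF with a kernel density estimate and penalise its squared deviation from a narrow zero-mean Gaussian target. This is an illustrative one-dimensional reduction (the paper works with a 2D, time-and-space PDF), and the target width is an assumed value.

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        def pdf_shaping_index(errors, target_sigma=0.1):
            """Quadratic deviation between the KDE of modeling errors and a
            narrow zero-mean Gaussian target PDF (smaller is better)."""
            grid = np.linspace(-3.0, 3.0, 601)
            kde = gaussian_kde(errors)                       # data-driven error PDF
            target = norm.pdf(grid, loc=0.0, scale=target_sigma)
            return np.sum((kde(grid) - target) ** 2) * (grid[1] - grid[0])

        # Smaller index => the error PDF is taller, narrower and centred near zero
        rng = np.random.default_rng(1)
        print(pdf_shaping_index(0.3 * rng.standard_normal(500)))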

  6. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  7. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  8. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  9. [Prediction of heat-related mortality impacts under climate change scenarios in Shanghai].

    PubMed

    Guo, Ya-fei; Li, Tian-tian; Cheng, Yan-li; Ge, Tan-xi; Chen, Chen; Liu, Fan

    2012-11-01

    To project the future impacts of climate change on heat-related mortality in Shanghai. Statistical downscaling techniques were applied to simulate the daily mean temperatures of Shanghai in the middle and farther future under the changing climate. Based on the published exposure-response relationship between temperature and mortality in Shanghai, we projected the heat-related mortality in the middle and farther future under the scenarios of high-speed increase of carbon emission (A2) and low-speed increase of carbon emission (B2). The data from 1961 to 1990 were used to establish the model, the data from 1991 to 2001 were used to validate the model, and then the daily mean temperatures from 2030 to 2059 and from 2070 to 2099 were simulated and the heat-related mortality was projected. The data resources were the U.S. National Climatic Data Center (NCDC), the U.S. National Centers for Environmental Prediction reanalysis data on the SDSM website and the UK Hadley Centre coupled model data on the SDSM website. The explained variance and the standard error of the established model were 98.1% and 1.24°C, respectively. The R(2) value of the simulated trend line equaled 0.978 in Shanghai in the model validation. Therefore, the temperature prediction model simulated daily mean temperatures well. Under the A2 scenario, the daily mean temperatures in 2030 - 2059 and 2070 - 2099 were projected to be 17.9°C and 20.4°C, respectively, increasing by 1.1°C and 3.6°C compared to the baseline period (16.8°C). Under the B2 scenario, the daily mean temperatures in 2030 - 2059 and 2070 - 2099 were projected to be 17.8°C and 19.1°C, respectively, increasing by 1.0°C and 2.3°C compared to the baseline period (16.8°C). Under the A2 scenario, annual average heat-related mortality was projected to be 516 cases and 1191 cases in 2030 - 2059 and 2070 - 2099, respectively, increasing by 53.6% and 254.5% compared with the baseline period (336 cases). Under the B2 scenario, annual average heat-related mortality was projected to be 498 cases and 832 cases in 2030 - 2059 and 2070 - 2099, respectively, increasing by 48.2% and 147.6% compared with the baseline period (336 cases). Under the changing climate, heat-related mortality is projected to increase in the future, and the increase will be more obvious in 2070 - 2099 than in 2030 - 2059.

  10. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johanna H Oxstrand; Katya L Le Blanc

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less explored application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is "Stop – Start – Continue", i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures as well as what features should not be incorporated. The model also provides a means to identify what new features not present in paper-based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.

  11. Modeling of Geometric Error in Linear Guide Way to Improve the vertical three-axis CNC Milling machine’s accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines with a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase and must be controlled in order to build machines with high accuracy. To improve the accuracy of the three-axis vertical milling machine, the geometric errors and the error position parameters in the machine tool are identified and arranged in a mathematical model. The geometric error in the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical modeling approach calculates the alignment and angle errors in the components that support the machine motion, namely the linear guide way and the linear motion elements. The purpose of using this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between the alignment error, position and angle on a linear guide way of three-axis vertical milling machines.
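
    A sketch of how per-axis error parameters typically enter a homogeneous transformation under the small-angle approximation, which is a common way to assemble such geometric-error models. The single-axis reduction and the numerical values are illustrative only, not the paper's parameters.

        import numpy as np

        def axis_error_htm(dx, dy, dz, eps_x, eps_y, eps_z):
            """Homogeneous transform of one axis's geometric errors:
            three positioning/straightness errors (dx, dy, dz) and three small
            angular errors (roll, pitch, yaw), using the small-angle approximation."""
            return np.array([
                [1.0,    -eps_z,  eps_y,  dx],
                [eps_z,   1.0,   -eps_x,  dy],
                [-eps_y,  eps_x,  1.0,    dz],
                [0.0,     0.0,    0.0,    1.0],
            ])

        # Example: propagate a nominal tool point through the X-axis error transform
        T_x = axis_error_htm(5e-3, 2e-3, 1e-3, 10e-6, 15e-6, 8e-6)    # mm and rad
        tool_point = np.array([100.0, 50.0, 20.0, 1.0])               # homogeneous coords
        actual = T_x @ tool_point
        volumetric_error = actual[:3] - tool_point[:3]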

  12. Automatic detection of new tumors and tumor burden evaluation in longitudinal liver CT scan studies.

    PubMed

    Vivanti, R; Szeskin, A; Lev-Cohain, N; Sosna, J; Joskowicz, L

    2017-11-01

    Radiological longitudinal follow-up of liver tumors in CT scans is the standard of care for disease progression assessment and for liver tumor therapy. Finding new tumors in the follow-up scan is essential to determine malignancy, to evaluate the total tumor burden, and to determine treatment efficacy. Since new tumors are typically small, they may be missed by examining radiologists. We describe a new method for the automatic detection and segmentation of new tumors in longitudinal liver CT studies and for liver tumor burden quantification. Its inputs are the baseline and follow-up CT scans, the baseline tumor delineations, and a tumor appearance prior model. Its outputs are the new tumor segmentations in the follow-up scan, the tumor burden quantification in both scans, and the tumor burden change. Our method is the first comprehensive method that is explicitly designed to find new liver tumors. It integrates information from the scans, the known baseline tumor delineations, and a tumor appearance prior model in the form of a global convolutional neural network classifier. Unlike other deep learning-based methods, it does not require large tagged training sets. Our experimental results on 246 tumors, of which 97 were new tumors, from 37 longitudinal liver CT studies with radiologist-approved ground-truth segmentations, yield a true positive new tumor detection rate of 86% versus 72% with stand-alone detection, and a tumor burden volume overlap error of 16%. New tumor detection and tumor burden volumetry are important for diagnosis and treatment. Our new method enables a simplified radiologist-friendly workflow that is potentially more accurate and reliable than the existing one by automatically and accurately following known tumors and detecting new tumors in the follow-up scan.

  13. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  14. Uncertainty of InSAR velocity fields for measuring long-wavelength displacement

    NASA Astrophysics Data System (ADS)

    Fattahi, H.; Amelung, F.

    2014-12-01

    Long-wavelength artifacts in InSAR data are the main limitation to measure long-wavelength displacement; they are traditionally attributed mainly to the inaccuracy of the satellite orbits (orbital errors). However, most satellites are precisely tracked resulting in uncertainties of orbits of 2-10 cm. Orbits of these satellites are thus precise enough to obtain precise velocity fields with uncertainties better than 1 mm/yr/100 km for older satellites (e.g. Envisat) and better than 0.2 mm/yr/100 km for modern satellites (e.g. TerraSAR-X and Sentinel-1) [Fattahi & Amelung, 2014]. Such accurate velocity fields are achievable if long-wavelength artifacts from sources other than orbital errors are identified and corrected for. We present a modified Small Baseline approach to measure long-wavelength deformation and evaluate the uncertainty of these measurements. We use a redundant network of interferograms for detection and correction of unwrapping errors to ensure the unbiased estimation of phase history. We distinguish between different sources of long-wavelength artifacts and correct those introduced by atmospheric delay, topographic residuals, timing errors, processing approximations and hardware issues. We evaluate the uncertainty of the velocity fields using a covariance matrix with the contributions from orbital errors and residual atmospheric delay. For contributions from the orbital errors we consider the standard deviation of velocity gradients in range and azimuth directions as a function of orbital uncertainty. For contributions from the residual atmospheric delay we use several approaches including the structure functions of InSAR time-series epochs, the predicted delay from numerical weather models and estimated wet delay from optical imagery. We validate this InSAR approach for measuring long-wavelength deformation by comparing InSAR velocity fields over ~500 km long swath across the southern San Andreas fault system with independent GPS velocities and examine the estimated uncertainties in several non-deforming areas. We show the efficiency of the approach to study the continental deformation across the Chaman fault system at the western Indian plate boundary. Ref: Fattahi, H., & Amelung, F., (2014), InSAR uncertainty due to orbital errors, Geophys, J. Int (in press).

  15. Ground-based remote sensing of tropospheric water vapour isotopologues within the project MUSICA

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Barthlott, S.; Hase, F.; González, Y.; Yoshimura, K.; García, O. E.; Sepúlveda, E.; Gomez-Pelaez, A.; Gisi, M.; Kohlhepp, R.; Dohe, S.; Blumenstock, T.; Strong, K.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Demoulin, P.; Jones, N.; Griffith, D. W. T.; Smale, D.; Robinson, J.

    2012-08-01

    Within the project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water), long-term tropospheric water vapour isotopologues data records are provided for ten globally distributed ground-based mid-infrared remote sensing stations of the NDACC (Network for the Detection of Atmospheric Composition Change). We present a new method allowing for an extensive and straightforward characterisation of the complex nature of such isotopologue remote sensing datasets. We demonstrate that the MUSICA humidity profiles are representative for most of the troposphere with a vertical resolution ranging from about 2 km (in the lower troposphere) to 8 km (in the upper troposphere) and with an estimated precision of better than 10%. We find that the sensitivity with respect to the isotopologue composition is limited to the lower and middle troposphere, whereby we estimate a precision of about 30‰ for the ratio between the two isotopologues HD16O and H216O. The measurement noise, the applied atmospheric temperature profiles, the uncertainty in the spectral baseline, and interferences from humidity are the leading error sources. We introduce an a posteriori correction method of the humidity interference error and we recommend applying it for isotopologue ratio remote sensing datasets in general. In addition, we present mid-infrared CO2 retrievals and use them for demonstrating the MUSICA network-wide data consistency. In order to indicate the potential of long-term isotopologue remote sensing data if provided with a well-documented quality, we present a climatology and compare it to simulations of an isotope incorporated AGCM (Atmospheric General Circulation Model). We identify differences in the multi-year mean and seasonal cycles that significantly exceed the estimated errors, thereby indicating deficits in the modeled atmospheric water cycle.

  16. Modeling hemodynamics in intracranial aneurysms: Comparing accuracy of CFD solvers based on finite element and finite volume schemes.

    PubMed

    Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui

    2018-06-01

    Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs) but its adoption in the clinical practice has been missing, partially due to lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver accuracy is increased by increasing the degree of polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from best FV and dGFE approximations are used as baseline for error quantification. On average, velocity errors for second-best approximations are approximately 1cm/s for a [0,125]cm/s velocity magnitude field. Results show that high-order dGFE provide better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.

  17. Grammatical Class Effects Across Impaired Child and Adult Populations

    PubMed Central

    Kambanaros, Maria; Grohmann, Kleanthes K.

    2015-01-01

    The aims of this study are to compare quantitative and qualitative differences for noun/verb retrieval across language-impaired groups, examine naming errors with reference to psycholinguistic models of word processing, and shed light on the nature of the naming deficit as well as determine relevant group commonalities and differences. This includes an attempt to establish whether error types differentiate language-impaired children from adults, to determine effects of psycholinguistic variables on naming accuracies, and to link the results to genetic mechanisms and/or neural circuitry in the brain. A total of 89 (language-)impaired participants took part in this report: 24 adults with acquired aphasia, 20 adults with schizophrenia-spectrum disorder, 31 adults with relapsing-remitting multiple sclerosis, and 14 children with specific language impairment. The results of simultaneous multiple regression analyses for the errors in verb naming compared to the psycholinguistic variables for all language-impaired groups are reported and discussed in relation to models of lexical processing. This discussion leads to considerations of genetic and/or neurobiological underpinnings. The presence of the noun–verb dissociation in both focal and non-focal brain impairment makes localization theories redundant, but supports wider neural network involvement. The patterns reported cannot be reduced to any one level of language processing, suggesting multiple interactions at different levels (e.g., receptive vs. expressive language abilities). Semantic-conceptual properties constrain syntactic properties, with implications for phonological word form retrieval. Competition needs to be resolved at both conceptual and phonological levels of representation. Moreover, this study may provide a cross-pathological baseline that can be probed further with respect to recent suggestions concerning a reconsideration of open- vs. closed-class items, according to which verbs may actually fall into the latter rather than the standardly received former class. PMID:26635644

  18. Error model for the SAO 1969 standard earth.

    NASA Technical Reports Server (NTRS)

    Martin, C. F.; Roy, N. A.

    1972-01-01

    A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.
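
    As a rough illustration of the chosen model (one quarter of the difference between two geopotential solutions), the sketch below forms such an error estimate for a handful of coefficients. The coefficient keys and values are invented placeholders, not the actual SAO M1 or APL 3.5 values.

    # Hypothetical sketch: error model taken as 1/4 of the difference between two
    # geopotential coefficient sets (keys are (degree, order); values are invented).
    sao_m1 = {(2, 0): -484.1654e-6, (3, 1): 2.03e-6, (4, 0): 0.54e-6}
    apl_35 = {(2, 0): -484.1690e-6, (3, 1): 2.07e-6, (4, 0): 0.51e-6}

    error_model = {nm: 0.25 * abs(sao_m1[nm] - apl_35[nm]) for nm in sao_m1}
    for nm, sigma in error_model.items():
        print(f"C{nm}: assumed coefficient error {sigma:.3e}")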

  19. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  20. Correction to: Regional differences in baseline disease activity and remission rates following golimumab treatment for RA: results from the GO-MORE trial.

    PubMed

    Durez, Patrick; Pavelka, Karel; Lazaro, Maria Alicia; Garcia-Kutzbach, Abraham; Moots, Robert J; Amital, Howard; Govoni, Marinella; Vastesaeger, Nathan

    2018-05-12

    The original publication contains two errors that require correction. Neither error changes the results or conclusions of the article, but the authors wish to draw the reader's attention to the corrected passages.

  1. Investigating Linguistic Relativity through Bilingualism: The Case of Grammatical Gender

    ERIC Educational Resources Information Center

    Kousta, Stavroula-Thaleia; Vinson, David P.; Vigliocco, Gabriella

    2008-01-01

    The authors investigated linguistic relativity effects by examining the semantic effects of grammatical gender (present in Italian but absent in English) in fluent bilingual speakers as compared with monolingual speakers. In an error-induction experiment, they used responses by monolingual speakers to establish a baseline for bilingual speakers…

  2. A new universal dynamic model to describe eating rate and cumulative intake curves123

    PubMed Central

    Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B

    2017-01-01

    Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model describing cumulative intake that universally describes gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) where eating rates were deliberately changed. Baseline data were used to compare the quality of the model's fit to the data with that of the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum of squared errors for the first-principles model was 337.1 ± 240.4, compared with 581.6 ± 563.5 for the quadratic model, a 43% improvement in fit. Individual comparisons demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximum food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could, compared with as much as they felt comfortable with, the maximal intake parameter was approximately doubled. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others produced only a satiation phase. Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and can be applied to determine how eating behavior factors influence total food intake. PMID:28077377
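
    The abstract does not give the model's functional form, so the sketch below only illustrates the kind of comparison described: fitting the classical quadratic cumulative intake curve against a simple saturating stand-in parameterized by an initial rate and a maximal intake. The stand-in form and all data values are assumptions, not the authors' model.

    # Illustrative fit comparison on invented cumulative-intake data (grams vs minutes).
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
    intake = np.array([0.0, 90.0, 165.0, 225.0, 270.0, 300.0, 318.0, 328.0])

    def quadratic(t, a, b):
        # classical quadratic cumulative intake curve
        return a * t + b * t ** 2

    def saturating(t, r0, imax):
        # assumed stand-in: initial rate r0, maximal intake imax; effective duration ~ imax / r0
        return imax * (1.0 - np.exp(-r0 * t / imax))

    (a, b), _ = curve_fit(quadratic, t, intake)
    (r0, imax), _ = curve_fit(saturating, t, intake, p0=[50.0, 350.0])

    sse_quadratic = np.sum((quadratic(t, a, b) - intake) ** 2)
    sse_saturating = np.sum((saturating(t, r0, imax) - intake) ** 2)
    print(sse_quadratic, sse_saturating)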

  3. Measurement of baseline and orientation between distributed aerospace platforms.

    PubMed

    Wang, Wen-Qin

    2013-01-01

    Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, in addition to the highly accurate time and frequency synchronization required for coherent signal processing, the baseline between the transmitting and receiving platforms and the orientation of the platforms towards each other during data recording must be measured in real time. In this paper, we propose an improved pulsed duplex microwave ranging approach, which allows determining the spatial baseline and orientation between distributed aerospace platforms by means of the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors arising from the separate oscillators used on the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.
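
    The abstract does not spell out the estimator, but the general idea behind duplex (two-way) ranging, on which such approaches build, is that a constant clock offset between the two platforms cancels when the two timestamped intervals are combined. The sketch below shows only that cancellation; all timings, the turnaround delay, and the clock offset are invented.

    # Toy two-way ranging: platform A transmits, B replies, and each timestamps with its own clock.
    c = 299_792_458.0          # m/s
    true_range = 1_200.0       # m, invented
    tau = true_range / c       # one-way propagation time
    clock_offset = 3.7e-6      # s, B's clock ahead of A's (invented, unknown in practice)

    t_a_tx = 0.0                               # A transmits (A's clock)
    t_b_rx = t_a_tx + tau + clock_offset       # B receives (B's clock)
    t_b_tx = t_b_rx + 10e-6                    # B replies after a known turnaround (B's clock)
    t_a_rx = t_b_tx - clock_offset + tau       # A receives (A's clock)

    # Combining the two measured intervals cancels the unknown clock offset.
    tau_est = ((t_a_rx - t_a_tx) - (t_b_tx - t_b_rx)) / 2.0
    print(tau_est * c)         # recovers ~1200 m despite the offset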

  4. Benefits of Pulmonary Rehabilitation in Idiopathic Pulmonary Fibrosis

    PubMed Central

    Swigris, Jeffrey J.; Fairclough, Diane L.; Morrison, Marianne; Make, Barry; Kozora, Elizabeth; Brown, Kevin K.; Wamboldt, Frederick S.

    2013-01-01

    BACKGROUND Information on the benefits of pulmonary rehabilitation (PR) in patients with idiopathic pulmonary fibrosis (IPF) is growing, but information on PR's effects on certain important outcomes is lacking. METHODS We conducted a pilot study of PR in IPF and analyzed changes in functional capacity, fatigue, anxiety, depression, sleep, and health status from baseline to after completion of a standard, 6-week PR program. RESULTS Six-minute walk distance improved by a mean ± standard error of 202 ± 135 feet (P = .01) from baseline. Fatigue Severity Scale score also improved significantly, declining by an average of 1.5 ± 0.5 points from baseline. There were trends toward improvement in anxiety, depression, and health status. CONCLUSIONS PR improves functional capacity and fatigue in patients with IPF. (ClinicalTrials.gov registration NCT00692796.) PMID:21333082

  5. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e., how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of the mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
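
    A minimal sketch of the two corruption schemes described (a systematic offset proportional to the mean monthly precipitation, and independent zero-mean Gaussian noise scaled to the monthly standard deviation); the precipitation series and the 10%/15% levels are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    precip = np.array([45.0, 38.0, 52.0, 61.0, 70.0, 85.0, 90.0, 78.0, 66.0, 55.0, 48.0, 42.0])  # mm/month, invented

    # Systematic error: add a fixed fraction (here 10%) of the mean monthly precipitation.
    precip_systematic = precip + 0.10 * precip.mean()

    # Random error: independent Gaussian noise with sd = 15% of the monthly standard deviation.
    precip_random = precip + rng.normal(0.0, 0.15 * precip.std(), size=precip.size)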

  6. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
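
    As a schematic of the idea (propagating a signal value error from the block where it originates to every downstream block it can affect), the toy sketch below walks a block-diagram graph; the block names and connectivity are invented, not taken from the patent.

    # Toy propagation of a signal value error through a directed block diagram.
    downstream = {
        "sensor": ["filter"],
        "filter": ["controller"],
        "controller": ["actuator", "logger"],
        "actuator": [],
        "logger": [],
    }

    def blocks_affected_by(source):
        affected, stack = set(), [source]
        while stack:
            block = stack.pop()
            for nxt in downstream[block]:
                if nxt not in affected:
                    affected.add(nxt)
                    stack.append(nxt)
        return affected

    print(blocks_affected_by("sensor"))   # functional blocks a sensor error can reach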

  7. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (eg, random forests, and LASSO) to map a large set of inexpensively computed “error indicators” (ie, features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (eg, time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
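
    A minimal sketch of the framework's core step under stated assumptions (synthetic data, a random forest as the regression technique, no locality/clustering step): map surrogate-produced error indicators to the surrogate's QoI error, then use the prediction as a correction. This is an illustration of the idea, not the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n_train, n_features = 500, 8
    indicators = rng.normal(size=(n_train, n_features))                    # cheap "error indicators"
    qoi_error = (0.5 * indicators[:, 0] - 0.2 * indicators[:, 3] ** 2
                 + rng.normal(scale=0.05, size=n_train))                   # synthetic surrogate error

    error_model = RandomForestRegressor(n_estimators=200, random_state=0)
    error_model.fit(indicators, qoi_error)

    new_indicators = rng.normal(size=(1, n_features))
    surrogate_qoi = 1.0                                                    # placeholder surrogate prediction
    corrected_qoi = surrogate_qoi + error_model.predict(new_indicators)[0] # use 1: error as a correction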

  8. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (eg, random forests, and LASSO) to map a large set of inexpensively computed “error indicators” (ie, features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (eg, time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.

  9. Stereotype susceptibility narrows the gender gap in imagined self-rotation performance.

    PubMed

    Wraga, Maryjane; Duncan, Lauren; Jacobs, Emily C; Helt, Molly; Church, Jessica

    2006-10-01

    Three studies examined the impact of stereotype messages on men's and women's performance of a mental rotation task involving imagined self-rotations. Experiment 1 established baseline differences between men and women; women made 12% more errors than did men. Experiment 2 found that exposure to a positive stereotype message enhanced women's performance in comparison with that of another group of women who received neutral information. In Experiment 3, men who were exposed to the same stereotype message emphasizing a female advantage made more errors than did male controls, and the magnitude of error was similar to that for women from Experiment 1. The results suggest that the gender gap in mental rotation performance is partially caused by experiential factors, particularly those induced by sociocultural stereotypes.

  10. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
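
    A minimal sketch of the baseline idea (not the authors' implementation): compute pairwise distances between corresponding feature points within each epoch and compare them, so no registration between the two scans is needed. The coordinates are invented.

    import numpy as np

    def baseline_lengths(points):
        # points: (n, 3) feature-point coordinates in one scan's own frame
        diffs = points[:, None, :] - points[None, :, :]
        return np.linalg.norm(diffs, axis=-1)

    epoch_1 = np.array([[0.00, 0.00, 0.0], [2.00, 0.00, 0.0], [2.00, 1.50, 0.0]])  # before testing
    epoch_2 = np.array([[0.03, 0.00, 0.0], [2.00, 0.01, 0.0], [2.00, 1.54, 0.0]])  # after testing

    # Registration-free change indicator: differences of corresponding baseline lengths.
    baseline_change = baseline_lengths(epoch_2) - baseline_lengths(epoch_1)
    print(np.round(baseline_change, 3))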

  11. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  12. A map overlay error model based on boundary geometry

    USGS Publications Warehouse

    Gaeuman, D.; Symanzik, J.; Schmidt, J.C.

    2005-01-01

    An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time.

  13. The effects of multiple aerospace environmental stressors on human performance

    NASA Technical Reports Server (NTRS)

    Popper, S. E.; Repperger, D. W.; Mccloskey, K.; Tripp, L. D.

    1992-01-01

    An extended Fitts' law paradigm reaction time (RT) task was used to evaluate the effects of acceleration on human performance in the Dynamic Environment Simulator (DES) at Armstrong Laboratory, Wright-Patterson AFB, Ohio. This effort was combined with an evaluation of the standard CSU-13 P anti-gravity suit versus three configurations of a 'retrograde inflation anti-G suit'. Results indicated that RT and error rates increased 17 percent and 14 percent, respectively, from baseline to the end of the simulated aerial combat maneuver, and that the most common error was pressing too few buttons.

  14. VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)

    NASA Astrophysics Data System (ADS)

    Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.

    2016-07-01

    The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, and the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistical error on V2 and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (available at http://oidb.jmmc.fr). (1 data file).

  15. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  16. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not match the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  17. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    NASA Astrophysics Data System (ADS)

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements. Neither policy induces major increases in fuel economy.

  18. High Resolution Digital Surface Model For Production Of Airport Obstruction Charts Using Spaceborne SAR Sensors

    NASA Astrophysics Data System (ADS)

    Oliveira, Henrique; Rodrigues, Marco; Radius, Andrea

    2012-01-01

    Airport Obstruction Charts (AOCs) are graphical representations of natural or man-made obstructions (their locations and heights) around airfields, according to International Civil Aviation Organization (ICAO) Annexes 4, 14 and 15. One of the most important types of data used in AOC production/update tasks is a Digital Surface Model (first reflective surface) of the surveyed area. The development of advanced remote sensing technologies provides the tools for obstruction data acquisition, while Geographic Information Systems (GIS) provide an ideal platform for storing and analyzing this type of data, enabling the production of digital AOCs, greatly increasing the situational awareness of pilots and enhancing air navigation safety [1]. Data corresponding to the first reflective surface can be acquired through the use of Airborne Laser Scanning / Light Detection and Ranging (ALS/LIDAR) or spaceborne SAR systems. The need to survey broad areas, such as the entire territory of a state, makes spaceborne SAR systems the most adequate option, in economic and feasibility terms, for performing the monitoring and producing a high resolution Digital Surface Model (DSM). High resolution DSM generation depends on many factors: the available data set, the technique used, and the setting parameters. To increase precision and obtain high resolution products, two techniques are available that use a stack of data: the Permanent Scatterers (PS) technique [2], which uses a large stack of data to identify many stable and coherent targets through multi-temporal analysis, remove the atmospheric contribution and minimize estimation errors; and the Small Baseline Subset (SBAS) technique ([3],[4]), which relies on the use of small baseline SAR interferograms and on the application of the so-called singular value decomposition (SVD) method in order to link independent SAR acquisition data sets separated by large baselines, thus increasing the number of data used for the analysis.

  19. A Final Approach Trajectory Model for Current Operations

    NASA Technical Reports Server (NTRS)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system, which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed: one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study the effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look-ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look-ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
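
    The sketch below contrasts the baseline dead reckoning prediction with a simple polynomial fit over recent track points, which is the spirit of the comparison described; the track history, look-ahead time, and polynomial degree are assumptions, and this is not the NASA implementation.

    import numpy as np

    t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])     # s, invented track history
    x = np.array([0.0, 0.9, 1.7, 2.4, 3.0])         # nmi along track, invented
    look_ahead = 120.0                               # s

    # Baseline: dead reckoning from the last observed ground speed.
    v_last = (x[-1] - x[-2]) / (t[-1] - t[-2])
    x_dead_reckoning = x[-1] + v_last * look_ahead

    # Alternative: low-order polynomial fit to the recent track, extrapolated ahead.
    coeffs = np.polyfit(t, x, deg=2)
    x_polynomial = np.polyval(coeffs, t[-1] + look_ahead)

    print(x_dead_reckoning, x_polynomial)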

  20. Developing models of how cognitive improvements change functioning: Mediation, moderation and moderated mediation

    PubMed Central

    Wykes, Til; Reeder, Clare; Huddy, Vyv; Taylor, Rumina; Wood, Helen; Ghirasim, Natalia; Kontis, Dimitrios; Landau, Sabine

    2012-01-01

    Background Cognitive remediation (CRT) affects functioning, but the extent and type of cognitive improvements necessary are unknown. Aim To develop and test models of how cognitive improvement transfers to work behaviour using the data from a current service. Method Participants (N = 49) with a support worker and a paid or voluntary job were offered CRT in a Phase 2 single-group design with three assessments: baseline, post-therapy and follow-up. Working memory, cognitive flexibility, planning and work outcomes were assessed. Results Three models were tested (mediation — cognitive improvements drive functioning improvement; moderation — post-treatment cognitive level affects the impact of CRT on functioning; moderated mediation — cognition drives functioning improvements only after a certain level is achieved). There was evidence of mediation (planning improvement was associated with improved work quality). There was no evidence that cognitive flexibility (total Wisconsin Card Sorting Test errors) and working memory (Wechsler Adult Intelligence Scale III digit span) mediated work functioning despite significant effects. There was some evidence of moderated mediation for planning improvement if participants had poorer memory and/or made fewer WCST errors. The total CRT effect on work quality was d = 0.55, but the indirect (planning-mediated) CRT effect was d = 0.082. Conclusion Planning improvements led to better work quality but only accounted for a small proportion of the total effect on work outcome. Other specific and non-specific effects of CRT and the work programme are likely to account for some of the remaining effect. This is the first time complex models have been tested, and future Phase 3 studies need to further test mediation and moderated mediation models. PMID:22503640

  1. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected; here they are utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478
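
    A minimal sketch of the general approach (not the authors' algorithm): estimate the baseline by interpolating between presumed isoelectric samples and subtract it from the ECG. Plain linear interpolation is used here for brevity, and the signal, sampling rate, and isoelectric-point positions are all invented.

    import numpy as np

    fs = 360.0                                      # Hz, assumed sampling rate
    t = np.arange(0.0, 10.0, 1.0 / fs)
    ecg = np.sin(2 * np.pi * 1.0 * t)               # toy stand-in for an ECG
    wander = 0.5 * np.sin(2 * np.pi * 0.1 * t)      # 0.1 Hz baseline drift
    recorded = ecg + wander

    iso_idx = np.arange(0, t.size, int(0.8 * fs))   # pretend isoelectric samples (~1 per beat)
    baseline = np.interp(t, t[iso_idx], recorded[iso_idx])   # piecewise linear baseline estimate
    corrected = recorded - baseline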

  2. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected; here they are utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.

  3. Regionalized PM2.5 Community Multiscale Air Quality model performance evaluation across a continuous spatiotemporal domain.

    PubMed

    Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L

    2017-01-01

    The regulatory Community Multiscale Air Quality (CMAQ) model is a means of understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space and time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority of the error being systematic. Areas of high systematic error are collocated with areas of high random error, implying that both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing the underlying causes of random error.

  4. Finite element structural redesign by large admissible perturbations

    NASA Technical Reports Server (NTRS)

    Bernitsas, Michael M.; Beyko, E.; Rim, C. W.; Alzahabi, B.

    1991-01-01

    In structural redesign, two structural states are involved: the baseline (known) State S1 with unacceptable performance, and the objective (unknown) State S2 with given performance specifications. The difference between the two states in performance and design variables may be as high as 100 percent or more depending on the scale of the structure. A Perturbation Approach to Redesign (PAR) is presented to relate any two structural states S1 and S2 that are modeled by the same finite element model and represented by different values of the design variables. General perturbation equations are derived expressing implicitly the natural frequencies, dynamic modes, static deflections, static stresses, Euler buckling loads, and buckling modes of the objective S2 in terms of its performance specifications, and S1 data and Finite Element Analysis (FEA) results. Large Admissible Perturbation (LEAP) algorithms are implemented in code RESTRUCT to define the objective S2 incrementally, without trial and error, by postprocessing FEA results of S1 with no additional FEAs. Systematic numerical applications in redesign of a 10 element 48 degree of freedom (dof) beam, a 104 element 192 dof offshore tower, a 64 element 216 dof plate, and a 144 element 896 dof cylindrical shell show the accuracy, efficiency, and potential of PAR to find an objective state that may differ 100 percent from the baseline design.

  5. Poststroke Fatigue: Who Is at Risk for an Increase in Fatigue?

    PubMed Central

    van Eijsden, Hanna Maria; van de Port, Ingrid Gerrie Lambert; Visser-Meily, Johanna Maria August; Kwakkel, Gert

    2012-01-01

    Background. Several studies have examined determinants related to poststroke fatigue. However, it is unclear which determinants can predict an increase in poststroke fatigue over time. Aim. This prospective cohort study aimed to identify determinants which predict an increase in poststroke fatigue. Methods. A total of 250 patients with stroke were examined at inpatient rehabilitation discharge (T0) and 24 weeks later (T1). Fatigue was measured using the Fatigue Severity Scale (FSS). An increase in poststroke fatigue was defined as an increase in the FSS score beyond the 95% limits of the standard error of measurement of the FSS (i.e., 1.41 points) between T0 and T1. Candidate determinants included personal factors, stroke characteristics, physical, cognitive, and emotional functions, and activities and participation, all assessed at T0. Factors predicting an increase in fatigue were identified using forward multivariate logistic regression analysis. Results. The only independent predictor of an increase in poststroke fatigue was FSS (OR 0.50; 0.38–0.64, P < 0.001). The model including FSS at baseline correctly predicted 7.9% of the patients who showed increased fatigue at T1. Conclusion. The prognostic model to predict an increase in fatigue after stroke has limited predictive value, but baseline fatigue is the most important independent predictor. Overall, fatigue levels remained stable over time. PMID:22028989

  6. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be carried out in a flight quality handling simulation.

  7. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence of only limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially of importance in the scope of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning, PhD thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations, SIAM/ASA Journal on Uncertainty Quantification, submitted.

  8. The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals

    NASA Astrophysics Data System (ADS)

    Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat

    2018-01-01

    Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
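
    A minimal sketch of the triple collocation estimator on which such analyses rest (standard covariance form, assuming the three products have mutually uncorrelated errors); the synthetic soil moisture series and error levels are invented and do not correspond to the SMAP products.

    import numpy as np

    rng = np.random.default_rng(2)
    truth = rng.uniform(0.10, 0.40, size=5000)                 # synthetic volumetric soil moisture
    x = truth + rng.normal(0.0, 0.03, truth.size)              # product 1 (e.g. an SCA-V-like retrieval)
    y = truth + rng.normal(0.0, 0.05, truth.size)              # product 2 (e.g. a DCA-like retrieval)
    z = truth + rng.normal(0.0, 0.04, truth.size)              # product 3 (e.g. a model estimate)

    def tc_error_variance(a, b, c):
        # error variance of 'a', assuming zero cross-correlation between the products' errors
        C = np.cov(np.vstack([a, b, c]))
        return C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]

    print(np.sqrt(tc_error_variance(x, y, z)))                 # ~0.03, the error sd of product 1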

  9. Treatment mechanism in the MRC preschool autism communication trial: implications for study design and parent-focussed therapy for children.

    PubMed

    Pickles, Andrew; Harris, Victoria; Green, Jonathan; Aldred, Catherine; McConachie, Helen; Slonims, Vicky; Le Couteur, Ann; Hudry, Kristelle; Charman, Tony

    2015-02-01

    The PACT randomised-controlled trial evaluated a parent-mediated communication-focused treatment for children with autism, intended to reduce symptom severity as measured by a modified Autism Diagnostic Observation Schedule-Generic (ADOS-G) algorithm score. The therapy targeted parental behaviour, with no direct interaction between therapist and child. While nonsignificant group differences were found on ADOS-G score, significant group differences were found for both parent and child intermediate outcomes. This study aimed to better understand the mechanism by which the PACT treatment influenced changes in child behaviour through the targeted parent behaviour. Mediation analysis was used to assess the direct and indirect effects of treatment via parent behaviour on child behaviour and via child behaviour on ADOS-G score. Alternative mediation was explored to study whether the treatment effect acted as hypothesised or via another plausible pathway. Mediation models typically assume no unobserved confounding between mediator and outcome and no measurement error in the mediator. We show how to better exploit the information often available within a trial to begin to address these issues, examining scope for instrumental variable and measurement error models. Estimates of mediation changed substantially when account was taken of the confounder effects of the baseline value of the mediator and of measurement error. Our best estimates, which accounted for both, suggested that the treatment effect on the ADOS-G score was very substantially mediated by parent synchrony and child initiations. The results highlighted the value of repeated measurement of mediators during trials. The theoretical model underlying the PACT treatment was supported. However, the substantial fall-off in treatment effect highlighted both the need for additional data and for additional target behaviours for therapy.

  10. A fast referenceless PRFS-based MR thermometry by phase finite difference

    NASA Astrophysics Data System (ADS)

    Zou, Chao; Shen, Huan; He, Mengyue; Tie, Changjun; Chung, Yiu-Cho; Liu, Xin

    2013-08-01

    Proton resonance frequency shift-based MR thermometry is a promising temperature monitoring approach for thermotherapy, but its accuracy is vulnerable to inter-scan motion. Model-based referenceless thermometry has been proposed to address this problem, but phase unwrapping is usually needed before the model fitting process. In this paper, a referenceless MR thermometry method using phase finite difference that avoids the time-consuming phase unwrapping procedure is proposed. Unlike the previously proposed phase gradient technique, the use of finite difference in the new method reduces the fitting error resulting from the ringing artifacts associated with phase discontinuity in the calculation of the phase gradient image. The new method takes into account the values at the perimeter of the region of interest because of their direct relevance to the extrapolated baseline phase of the region of interest (where the temperature increase takes place). In the simulation study and the in vivo and ex vivo experiments, the new method has root-mean-square temperature errors of 0.35 °C, 1.02 °C and 1.73 °C, compared to 0.83 °C, 2.81 °C, and 3.76 °C for the phase gradient method, respectively. The method also demonstrated slightly higher temperature accuracy than the original referenceless MR thermometry method. The proposed method is computationally efficient (∼0.1 s per image), making it very suitable for real-time temperature monitoring.
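
    For context, the standard PRFS relation that converts a measured phase change into a temperature change is sketched below; the field strength is taken from the abstract, while the echo time, PRF coefficient, and phase value are assumed example values, not the study's numbers.

    import numpy as np

    alpha = -0.01e-6                 # PRF change coefficient, ~ -0.01 ppm/degC (approximate literature value)
    gamma = 2 * np.pi * 42.58e6      # proton gyromagnetic ratio, rad/s/T
    B0 = 9.4                         # T, field strength used in the study
    TE = 5e-3                        # s, echo time (assumed)

    delta_phi = -0.05                # rad, example phase change relative to the baseline phase
    delta_T = delta_phi / (alpha * gamma * B0 * TE)
    print(delta_T)                   # temperature change in degC at this voxel (~0.4 degC here)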

  11. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and to provide users of ArcGIS Desktop with a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and in the coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
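
    The sketch below illustrates the general Latin Hypercube approach to propagating input uncertainty through a raster model and summarizing per-cell output uncertainty. It is a generic illustration, not REPTool itself: the toy model (a linear combination of two rasters), the coefficient distributions, and the sample size are all assumptions.

    import numpy as np
    from scipy.stats import norm, qmc

    rng = np.random.default_rng(3)
    n_samples, shape = 200, (50, 50)
    raster1 = rng.uniform(0.0, 1.0, shape)           # invented input rasters
    raster2 = rng.uniform(0.0, 1.0, shape)

    sampler = qmc.LatinHypercube(d=2, seed=0)
    u = sampler.random(n_samples)
    a = norm(loc=2.0, scale=0.2).ppf(u[:, 0])        # uncertain coefficient for raster1
    b = norm(loc=0.5, scale=0.1).ppf(u[:, 1])        # uncertain coefficient for raster2

    outputs = np.stack([ai * raster1 + bi * raster2 for ai, bi in zip(a, b)])
    cell_uncertainty = outputs.std(axis=0)           # per-cell spread of the model output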

  12. Unwrapping eddy current compensation: improved compensation of eddy current induced baseline shifts in high-resolution phase-contrast MRI at 9.4 Tesla.

    PubMed

    Espe, Emil K S; Zhang, Lili; Sjaastad, Ivar

    2014-10-01

    Phase-contrast MRI (PC-MRI) is a versatile tool allowing evaluation of in vivo motion, but is sensitive to eddy current induced phase offsets, causing errors in the measured velocities. In high-resolution PC-MRI, these offsets can be sufficiently large to cause wrapping in the baseline phase, rendering conventional eddy current compensation (ECC) inadequate. The purpose of this study was to develop an improved ECC technique (unwrapping ECC) able to handle baseline phase discontinuities. Baseline phase discontinuities are unwrapped by minimizing the spatiotemporal standard deviation of the static-tissue phase. Computer simulations were used to demonstrate the theoretical foundation of the proposed technique. The presence of baseline wrapping was confirmed in high-resolution myocardial PC-MRI of a normal rat heart at 9.4 Tesla (T), and the performance of unwrapping ECC was compared with conventional ECC. Areas of phase wrapping in static regions were clearly evident in high-resolution PC-MRI. The proposed technique successfully eliminated discontinuities in the baseline, and resulted in significantly better ECC than the conventional approach. We report the occurrence of baseline phase wrapping in PC-MRI, and provide an improved ECC technique capable of handling its presence. Unwrapping ECC offers improved correction of eddy current induced baseline shifts in high-resolution PC-MRI.

  13. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  14. Descriptive Analysis of a Baseline Concussion Battery Among U.S. Service Academy Members: Results from the Concussion Assessment, Research, and Education (CARE) Consortium.

    PubMed

    O'Connor, Kathryn L; Dain Allred, C; Cameron, Kenneth L; Campbell, Darren E; D'Lauro, Christopher J; Houston, Megan N; Johnson, Brian R; Kelly, Tim F; McGinty, Gerald; O'Donnell, Patrick G; Peck, Karen Y; Svoboda, Steven J; Pasquina, Paul; McAllister, Thomas; McCrea, Michael; Broglio, Steven P

    2018-03-28

    The prevalence and possible long-term consequences of concussion remain an increasing concern to the U.S. military, particularly as it pertains to maintaining a medically ready force. Baseline testing is being used both in the civilian and military domains to assess concussion injury and recovery. Accurate interpretation of these baseline assessments requires one to consider other influencing factors not related to concussion. To date, there is limited understanding, especially within the military, of what factors influence normative test performance. Given the significant physical and mental demands placed on service academy members (SAM), and their relatively high risk for concussion, it is important to describe the demographics and normative profile of SAMs. Furthermore, the absence of available baseline normative data on female and non-varsity SAMs makes interpretation of post-injury assessments challenging. Understanding how individuals perform at baseline, given their unique individual characteristics (e.g., concussion history, sex, competition level), will inform post-concussion assessment and management. Thus, the primary aim of this manuscript is to characterize the SAM population and determine normative values on a concussion baseline testing battery. All data were collected as part of the Concussion Assessment, Research and Education (CARE) Consortium. The baseline test battery included a post-concussion symptom checklist (Sport Concussion Assessment Tool; SCAT), a psychological health screening inventory (Brief Symptom Inventory; BSI-18), a neurocognitive evaluation (ImPACT), the Balance Error Scoring System (BESS), and the Standardized Assessment of Concussion (SAC). Linear regression models were used to examine differences across sexes, competition levels, and varsity contact levels while controlling for academy, freshman status, race, and previous concussion. Zero-inflated negative binomial models estimated symptom scores due to the high frequency of zero scores. Significant, but small, sex effects were observed on the ImPACT visual memory task. While females performed worse than males (p < 0.0001, partial η2 = 0.01), these differences were small and not larger than the effects of the covariates. A similar pattern was observed for competition level on the SAC. There was a small but significant difference across competition levels. SAMs participating in varsity athletics did significantly worse on the SAC than SAMs participating in club or intramural athletics (all p's < 0.001, η2 = 0.01). When examining symptom reporting, males were more than two times as likely to report zero symptoms on the SCAT or BSI-18. Intramural SAMs had the highest symptom number and severity compared to varsity SAMs (p < 0.0001, Cohen's d < 0.2). Contact level was not associated with SCAT or BSI-18 symptoms among varsity SAMs. Notably, the significant differences across competition level on the SCAT and BSI-18 were sub-clinical and had small effect sizes. The current analyses provide the first baseline concussion battery normative data among SAMs. While statistically significant differences may be observed on baseline tests, the effect sizes for competition and contact levels are very small, indicating that differences are likely not clinically meaningful at baseline. Identifying baseline differences and significant covariates is important for future concussion-related analyses to inform concussion evaluations for all athlete levels.

  15. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. In terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to handle the existing errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of existing errors, timing, shape and volume, which are the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate whether its parameters can be used in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
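
    The sketch below illustrates the general idea of parameter updating in a lumped rainfall-runoff model: adjust the parameter vector so the simulated hydrograph matches a recent observed event that contains, for example, a volume error. The simple linear-reservoir `run_model` is a hypothetical stand-in; the ERM model itself is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def run_model(params, rainfall):
        """Hypothetical lumped model: a simple linear-reservoir response."""
        k, scale = params
        q = np.zeros_like(rainfall)
        store = 0.0
        for t, p in enumerate(rainfall):
            store += scale * p
            q[t] = store / k      # outflow proportional to storage
            store -= q[t]
        return q

    def objective(params, rainfall, observed):
        simulated = run_model(params, rainfall)
        return np.sqrt(np.mean((simulated - observed) ** 2))   # RMSE

    rng = np.random.default_rng(0)
    rainfall = rng.gamma(2.0, 2.0, size=100)          # synthetic event
    observed = run_model([4.0, 0.8], rainfall) * 1.1  # imposed +10% volume error

    # Update the parameters so the simulation copes with the volume error
    result = minimize(objective, x0=[4.0, 0.8], args=(rainfall, observed),
                      method="Nelder-Mead")
    print("updated parameters:", result.x)
    ```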

  16. Unit of Measurement Used and Parent Medication Dosing Errors

    PubMed Central

    Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.

    2014-01-01

    BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742

  17. Unit of measurement used and parent medication dosing errors.

    PubMed

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
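
    The multivariable logistic regression described in the two records above can be sketched as follows: the adjusted odds ratio for dosing error by unit type is the exponentiated coefficient from a logistic model that also includes the covariates. The data and column names here are synthetic, hypothetical stand-ins, not the study's data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "tsp_tbsp_unit": rng.integers(0, 2, n),        # 1 = teaspoon/tablespoon label
        "parent_age": rng.normal(32, 6, n),
        "low_health_literacy": rng.integers(0, 2, n),
    })
    logit_p = -1.0 + 0.8 * df["tsp_tbsp_unit"] + 0.5 * df["low_health_literacy"]
    df["error"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    fit = smf.logit("error ~ tsp_tbsp_unit + parent_age + low_health_literacy",
                    data=df).fit(disp=False)
    summary = pd.DataFrame({"adjusted OR": np.exp(fit.params),
                            "CI lower": np.exp(fit.conf_int()[0]),
                            "CI upper": np.exp(fit.conf_int()[1])})
    print(summary)
    ```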

  18. Trial-to-trial adaptation in control of arm reaching and standing posture

    PubMed Central

    Pienciak-Siewert, Alison; Horan, Dylan P.

    2016-01-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888

  19. Trial-to-trial adaptation in control of arm reaching and standing posture.

    PubMed

    Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A

    2016-12-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.

  20. Use of models to map potential capture of surface water

    USGS Publications Warehouse

    Leake, Stanley A.

    2006-01-01

    The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
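
    The capture-mapping workflow described above can be summarized in a short loop: establish baseline boundary flows from a steady-state run, then for each cell run the transient model with a single well and convert the change in boundary flows into a capture fraction. In this sketch the ground-water model is abstracted behind placeholder functions so the loop is runnable; a real analysis would call the model (e.g., MODFLOW) at those points.

    ```python
    import numpy as np

    NROW, NCOL = 20, 25
    WITHDRAWAL = 1000.0   # m3/day, illustrative withdrawal rate
    YEARS = 50            # time frame of interest
    rng = np.random.default_rng(0)

    def run_steady_state():
        """Placeholder for the baseline (no-withdrawal) steady-state run;
        returns net flow rates (m3/day) at each boundary component."""
        return {"stream_leakage": -500.0, "evapotranspiration": -300.0, "recharge": 800.0}

    def run_transient_with_well(row, col, rate, years):
        """Placeholder for a transient run with one well in cell (row, col).
        A real analysis would run the ground-water model here; this stub just
        perturbs the baseline budget so the loop below executes."""
        captured = rate * rng.uniform(0.0, 1.0)
        return {"stream_leakage": -500.0 + 0.7 * captured,
                "evapotranspiration": -300.0 + 0.3 * captured,
                "recharge": 800.0}

    baseline = run_steady_state()
    capture = np.full((NROW, NCOL), np.nan)
    for row in range(NROW):
        for col in range(NCOL):
            flows = run_transient_with_well(row, col, WITHDRAWAL, YEARS)
            # Sum of increased inflows and decreased outflows relative to baseline
            delta = sum(flows[name] - baseline[name] for name in baseline)
            capture[row, col] = delta / WITHDRAWAL   # capture fraction for this cell

    # `capture` can now be contoured (e.g., with matplotlib) to map potential
    # capture for the chosen time frame.
    ```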

  1. WE-D-18A-01: Evaluation of Three Commercial Metal Artifact Reduction Methods for CT Simulations in Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Kerns, J; Nute, J

    Purpose: To evaluate three commercial metal artifact reduction (MAR) methods in the context of radiation therapy treatment planning. Methods: Three MAR strategies were evaluated: Philips O-MAR, monochromatic imaging using Gemstone Spectral Imaging (GSI) dual energy CT, and monochromatic imaging with metal artifact reduction software (GSI-MARs). The Gammex RMI 467 tissue characterization phantom with several metal rods, and two anthropomorphic phantoms (a pelvic phantom with a hip prosthesis and a head phantom with dental fillings), were scanned with and without (baseline) metals. Each MAR method was evaluated based on CT number accuracy, metal size accuracy, and reduction in the severity of streak artifacts. CT number difference maps between the baseline and metal scan images were calculated, and the severity of streak artifacts was quantified using the percentage of pixels with >40 HU error ("bad pixels"). Results: Philips O-MAR generally reduced HU errors in the RMI phantom. However, increased errors and induced artifacts were observed for lung materials. GSI monochromatic 70 keV images generally showed similar HU errors as 120 kVp imaging, while 140 keV images reduced errors. GSI-MARs systematically reduced errors compared to GSI monochromatic imaging. All imaging techniques preserved the diameter of a stainless steel rod to within ±1.6 mm (2 pixels). For the hip prosthesis, O-MAR reduced the average percentage of bad pixels from 47% to 32%. For GSI 140 keV imaging, the percentage of bad pixels was reduced from 37% to 29% compared to 120 kVp imaging, while GSI-MARs further reduced it to 12%. For the head phantom, none of the MAR methods were particularly successful. Conclusion: The three MAR methods all improve CT images for treatment planning to some degree, but none of them is globally effective for all conditions. The MAR methods were successful for large metal implants in a homogeneous environment (hip prosthesis) but were not successful for the more complicated case of dental artifacts.
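
    The "bad pixel" metric used above is simply the percentage of pixels whose CT number differs from the metal-free baseline scan by more than 40 HU. A minimal sketch, assuming co-registered 2-D arrays of Hounsfield units, is shown below with synthetic data.

    ```python
    import numpy as np

    def percent_bad_pixels(baseline_hu, metal_hu, threshold=40.0, mask=None):
        """Return the percentage of (optionally masked) pixels with |ΔHU| > threshold."""
        diff = np.abs(metal_hu.astype(float) - baseline_hu.astype(float))
        if mask is not None:
            diff = diff[mask]        # e.g., restrict to the phantom interior
        return 100.0 * np.count_nonzero(diff > threshold) / diff.size

    # Example with synthetic images
    baseline_hu = np.zeros((512, 512))
    metal_hu = baseline_hu + np.random.normal(0.0, 30.0, size=baseline_hu.shape)
    print(f"bad pixels: {percent_bad_pixels(baseline_hu, metal_hu):.1f}%")
    ```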

  2. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between HUST08a and HUST11a models, HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for current mission with range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in the future mission, geophysical model error will be the main source for stripe error, which will limit the accuracy and spatial resolution of temporal gravity model. Therefore, observation error should be the primary error source taken into account at current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.

  3. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.

  4. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  5. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  6. Longitudinal changes in corneal curvature and its relationship to axial length in the Correction of Myopia Evaluation Trial (COMET) cohort.

    PubMed

    Scheiman, Mitchell; Gwiazda, Jane; Zhang, Qinghua; Deng, Li; Fern, Karen; Manny, Ruth E; Weissberg, Erik; Hyman, Leslie

    2016-01-01

    To describe longitudinal changes in corneal curvature (CC) and axial length (AL) over 14 years, and to explore the relationship between AL and CC, and the axial length/corneal radius (AL/CR) ratio. In total, 469 children aged 6 to <12 years were enrolled in COMET. Measurements of refractive error, CC (D), CR (mm), and ocular component dimensions including AL were gathered annually. Linear mixed models were used to evaluate longitudinal changes adjusting for covariates (gender, ethnicity, lens type, baseline age and baseline refraction). The Pearson correlation coefficient between AL and CC was computed at each visit. There was a slight but significant (p<0.0001) flattening in CC over 14 years. At all visits, females had significantly steeper CC than males (overall difference=0.53 D, p<0.0001). Caucasians had the steepest CC, and Hispanics the flattest (p=0.001). The correlation between AL and CC was -0.70 (p<0.0001) at baseline (mean age=9.3 years) and decreased to -0.53 (p<0.0001) at the 14-year visit (mean age=24.1 years). The average AL/CR ratio was 3.15 at baseline and increased to 3.31 at the 14-year visit. The correlation between the magnitude of myopia and the AL/CR ratio was significantly higher (p<0.0001) at each visit than the correlation between myopia and AL alone. Differences in average corneal curvature by age, gender, and ethnicity observed in early childhood remain consistent as myopia progresses and stabilizes. This study also demonstrates increases in the AL/CR ratio as myopia progresses and then stabilizes, supporting observations from previous cross-sectional data. Copyright © 2015 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.

  7. Postseismic slip of 2011 Tohoku-oki Earthquake across the trench axis through long-term geodetic observations

    NASA Astrophysics Data System (ADS)

    Yamamoto, R.; Hino, R.; Kido, M.; Osada, Y.; Honsho, C.

    2017-12-01

    Since postseismic deformation following the 2011 Tohoku-oki Earthquake is strongly affected by viscoelastic relaxation, it is difficult to identify postseismic slip from onshore (e.g., GNSS) and offshore (e.g., GPS-Acoustic: GPS-A) observations. To track postseismic slip directly, we installed acoustic ranging instruments across the axis of the central Japan Trench, off Miyagi, near the region of large coseismic motion (>50 m) that occurred during the 2011 Tohoku-oki Earthquake. Direct Path Ranging (DPR) measures the two-way travel time between a pair of transponders settled on the seafloor. Baseline length can be obtained from the measured travel time and the sound velocity, which is corrected beforehand for time-varying temperature and pressure. We further corrected for the motion of the acoustic elements due to attitude changes of the instruments. Baseline changes can be detected precisely by periodic ranging during the observation. We conducted observations over three periods (2013, 2014-2015, and 2015-2016) and found that no significant shortening across the trench axis took place. It follows that no shallow postseismic slip occurred off Miyagi, at least from 2013 to 2016. We examined the accuracy of the baseline length measurements and found errors of about 1.0 ppm (1.0 mm for a 1 km baseline), which is sufficiently small. Our results are consistent with the postseismic slip distribution model based on GPS-A observations. Acknowledgements: This research is supported by JSPS KAKENHI (26000002). The installation and recovery of instruments were executed during cruises of R/V Kairei (KR13-09; KR15-15), R/V Hakuho-maru (KH-13-05; KH-17-J02), and R/V Shinsei-maru (KS-14-17; KS-15-03; KS-16-14).
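
    The core of the direct-path ranging computation is simply converting a two-way acoustic travel time and a corrected sound speed into a baseline length. The sketch below illustrates this; the sound-speed formula is a rough placeholder, not the correction actually used in the study, which accounts for time-varying temperature and pressure along the path.

    ```python
    import numpy as np

    def sound_speed(temperature_c, depth_m, salinity_psu=34.7):
        """Placeholder sound-speed estimate (m/s); a real analysis would use a
        standard seawater equation such as Del Grosso or Chen-Millero."""
        return 1449.2 + 4.6 * temperature_c + 0.016 * depth_m

    def baseline_length(two_way_travel_time_s, temperature_c, depth_m):
        c = sound_speed(temperature_c, depth_m)
        return c * two_way_travel_time_s / 2.0

    # A ~1 km baseline at ~1500 m/s corresponds to roughly 1.33 s two-way time;
    # a 1.0 ppm length error is then about 1 mm.
    print(baseline_length(1.333, temperature_c=2.0, depth_m=5500.0))
    ```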

  8. Prostate Cancer Patient Characteristics Associated With a Strong Preference to Preserve Sexual Function and Receipt of Active Surveillance.

    PubMed

    Broughman, James R; Basak, Ramsankar; Nielsen, Matthew E; Reeve, Bryce B; Usinger, Deborah S; Spearman, Kiayni C; Godley, Paul A; Chen, Ronald C

    2018-04-01

    Men with early-stage prostate cancer have multiple options that have similar oncologic efficacy but vary in terms of their impact on quality of life. In low-risk cancer, active surveillance is the option that best preserves patients' sexual function, but it is unknown if patient preference affects treatment selection. Our objectives were to identify patient characteristics associated with a strong preference to preserve sexual function and to determine whether patient preference and baseline sexual function level are associated with receipt of active surveillance in low-risk cancer. In this population-based cohort of men with localized prostate cancer, baseline patient-reported sexual function was assessed using a validated instrument. Patients were also asked whether preservation of sexual function was very, somewhat, or not important. Prostate cancer disease characteristics and treatments received were abstracted from medical records. A modified Poisson regression model with robust standard errors was used to compute adjusted risk ratio (aRR) estimates. All statistical tests were two-sided. Among 1194 men, 52.6% indicated a strong preference for preserving sexual function. Older men were less likely to have a strong preference (aRR = 0.98 per year, 95% confidence interval [CI] = 0.97 to 0.99), while men with normal sexual function were more likely (vs poor function, aRR = 1.59, 95% CI = 1.39 to 1.82). Among 568 men with low-risk cancer, there was no clear association between baseline sexual function or strong preference to preserve function with receipt of active surveillance. However, strong preference may differentially impact those with intermediate baseline function vs poor function (P for interaction = .02). Treatment choice may not always align with patients' preferences. These findings demonstrate opportunities to improve delivery of patient-centered care in early prostate cancer.
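
    A modified Poisson regression, as used above, fits a Poisson model to a binary outcome and pairs it with a robust (sandwich) covariance so that the exponentiated coefficients can be read as adjusted risk ratios. The sketch below uses statsmodels with synthetic, hypothetical data and column names; it is an illustration of the technique, not the study's code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "age": rng.normal(64, 8, n),
        "normal_function": rng.integers(0, 2, n),
    })
    p = np.clip(0.6 - 0.005 * (df["age"] - 64) + 0.15 * df["normal_function"], 0.05, 0.95)
    df["strong_preference"] = (rng.random(n) < p).astype(int)

    # Poisson family + robust covariance yields risk ratios rather than odds ratios
    fit = smf.glm("strong_preference ~ age + normal_function", data=df,
                  family=sm.families.Poisson()).fit(cov_type="HC1")
    arr = pd.DataFrame({"aRR": np.exp(fit.params),
                        "CI lower": np.exp(fit.conf_int()[0]),
                        "CI upper": np.exp(fit.conf_int()[1])})
    print(arr)
    ```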

  9. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments of heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is thus proposed and used for the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with a residual error smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
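
    The modeling step above maps selected temperature variables to a predicted spindle thermal error with a small neural network. The sketch below uses scikit-learn's gradient-based MLPRegressor as a stand-in for the ABC-trained network described in the paper, with synthetic temperature and error data; it illustrates the idea only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    temperatures = rng.normal(25.0, 5.0, size=(200, 4))   # 4 selected temperature points (synthetic)
    thermal_error = temperatures @ np.array([0.8, 0.5, 0.3, 0.1]) + rng.normal(0, 0.5, 200)

    X_train, X_test, y_train, y_test = train_test_split(
        temperatures, thermal_error, test_size=0.25, random_state=0)

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    residual = np.abs(model.predict(X_test) - y_test)
    print(f"max residual error: {residual.max():.2f} (synthetic units, e.g. µm)")
    ```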

  10. Effect of rater training on reliability and accuracy of mini-CEX scores: a randomized, controlled trial.

    PubMed

    Cook, David A; Dupras, Denise M; Beckman, Thomas J; Thomas, Kris G; Pankratz, V Shane

    2009-01-01

    Mini-CEX scores assess resident competence. Rater training might improve mini-CEX score interrater reliability, but evidence is lacking. Evaluate a rater training workshop using interrater reliability and accuracy. Randomized trial (immediate versus delayed workshop) and single-group pre/post study (randomized groups combined). Academic medical center. Fifty-two internal medicine clinic preceptors (31 randomized and 21 additional workshop attendees). The workshop included rater error training, performance dimension training, behavioral observation training, and frame of reference training using lecture, video, and facilitated discussion. Delayed group received no intervention until after posttest. Mini-CEX ratings at baseline (just before workshop for workshop group), and four weeks later using videotaped resident-patient encounters; mini-CEX ratings of live resident-patient encounters one year preceding and one year following the workshop; rater confidence using mini-CEX. Among 31 randomized participants, interrater reliabilities in the delayed group (baseline intraclass correlation coefficient [ICC] 0.43, follow-up 0.53) and workshop group (baseline 0.40, follow-up 0.43) were not significantly different (p = 0.19). Mean ratings were similar at baseline (delayed 4.9 [95% confidence interval 4.6-5.2], workshop 4.8 [4.5-5.1]) and follow-up (delayed 5.4 [5.0-5.7], workshop 5.3 [5.0-5.6]; p = 0.88 for interaction). For the entire cohort, rater confidence (1 = not confident, 6 = very confident) improved from mean (SD) 3.8 (1.4) to 4.4 (1.0), p = 0.018. Interrater reliability for ratings of live encounters (entire cohort) was higher after the workshop (ICC 0.34) than before (ICC 0.18) but the standard error of measurement was similar for both periods. Rater training did not improve interrater reliability or accuracy of mini-CEX scores. clinicaltrials.gov identifier NCT00667940

  11. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables, the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.

  12. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
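
    The evaluation criterion used in this study, the leave-one-out cross-validation error of a k-NN classifier on normalized data, can be sketched in a few lines. The matrix and labels below are random placeholders standing in for a normalized expression matrix and its sample classes.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X_normalized = rng.normal(size=(60, 500))   # placeholder for a normalized samples x genes matrix
    y = rng.integers(0, 2, size=60)             # placeholder class labels

    knn = KNeighborsClassifier(n_neighbors=3)
    accuracy = cross_val_score(knn, X_normalized, y, cv=LeaveOneOut())
    loocv_error = 1.0 - accuracy.mean()
    print(f"LOOCV classification error: {loocv_error:.3f}")
    ```

    Repeating this for each normalization strategy and comparing the resulting LOOCV errors reproduces the kind of comparison the study reports.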

  13. Evaluation of normalization methods for cDNA microarray data by k-NN classification.

    PubMed

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-07-26

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.

  14. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
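
    A minimal sketch of the first model type, assuming synthetic inflow series in place of the Langvatn data: Box-Cox transform the observed and forecasted inflows with a common lambda, take their difference as the forecast error, and fit a first-order autoregressive model to that error series.

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(0)
    observed = rng.gamma(3.0, 10.0, size=365)                 # synthetic inflow series
    forecasted = observed * rng.normal(1.0, 0.1, size=365)    # synthetic forecasts

    obs_bc, lam = boxcox(observed)                 # transform observations, keep lambda
    fc_bc = boxcox(forecasted, lmbda=lam)          # apply the same lambda to forecasts
    errors = obs_bc - fc_bc

    ar1 = AutoReg(errors, lags=1).fit()            # first-order autoregressive error model
    print("AR(1) coefficient:", ar1.params[1])
    print("residual standard deviation:", np.sqrt(ar1.sigma2))
    ```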

  15. Stripe-PZT Sensor-Based Baseline-Free Crack Diagnosis in a Structure with a Welded Stiffener.

    PubMed

    An, Yun-Kyu; Shen, Zhiqi; Wu, Zhishen

    2016-09-16

    This paper proposes a stripe-PZT sensor-based baseline-free crack diagnosis technique in the heat affected zone (HAZ) of a structure with a welded stiffener. The proposed technique enables one to identify and localize a crack in the HAZ using only current data measured using a stripe-PZT sensor. The use of the stripe-PZT sensor makes it possible to significantly improve the applicability to real structures and minimize man-made errors associated with the installation process by embedding multiple piezoelectric sensors onto a printed circuit board. Moreover, a new frequency-wavenumber analysis-based baseline-free crack diagnosis algorithm minimizes false alarms caused by environmental variations by avoiding simple comparison with the baseline data accumulated from the pristine condition of a target structure. The proposed technique is numerically as well as experimentally validated using a plate-like structure with a welded stiffener, revealing that it successfully identifies and localizes a crack in the HAZ.

  16. Efficient algorithm for baseline wander and powerline noise removal from ECG signals based on discrete Fourier series.

    PubMed

    Bahaz, Mohamed; Benzid, Redha

    2018-03-01

    Electrocardiogram (ECG) signals are often contaminated with artefacts and noise that can lead to incorrect diagnoses when the signals are visually inspected by cardiologists. In this paper, the well-known discrete Fourier series (DFS) is re-explored and an efficient DFS-based method is proposed to reduce the contribution of both baseline wander (BW) and powerline interference (PLI) noise in ECG records. In the first step, the exact number of low-frequency harmonics contributing to BW is determined. Next, the baseline drift is estimated as the sum of all associated Fourier sinusoid components. Then, the baseline shift is discarded efficiently by subtracting its approximated version from the original biased ECG signal. Concerning the PLI, subtraction of the contributing harmonics, calculated in the same manner, efficiently reduces this type of noise. In addition to visual quality results, the proposed algorithm shows superior performance in terms of higher signal-to-noise ratio and smaller mean square error when compared with the DCT-based algorithm.
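
    The core idea, estimating the baseline wander as the sum of the lowest-frequency Fourier components and subtracting it, can be sketched with an FFT as below. The cutoff frequency here is a free choice for illustration; the paper determines the exact number of contributing harmonics explicitly, and an analogous subtraction of the powerline harmonics handles PLI.

    ```python
    import numpy as np

    def remove_baseline_wander(ecg, fs, bw_cutoff_hz=0.7):
        """Subtract the sum of Fourier components below `bw_cutoff_hz` from `ecg`."""
        spectrum = np.fft.rfft(ecg)
        freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
        baseline_spectrum = np.where(freqs <= bw_cutoff_hz, spectrum, 0.0)
        baseline = np.fft.irfft(baseline_spectrum, n=len(ecg))   # estimated drift
        return ecg - baseline, baseline

    # Synthetic example: 5 s of a 1 Hz "ECG-like" component plus 0.3 Hz drift
    fs = 360.0
    t = np.arange(0, 5, 1.0 / fs)
    ecg = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)
    clean, drift = remove_baseline_wander(ecg, fs)
    ```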

  17. Stripe-PZT Sensor-Based Baseline-Free Crack Diagnosis in a Structure with a Welded Stiffener

    PubMed Central

    An, Yun-Kyu; Shen, Zhiqi; Wu, Zhishen

    2016-01-01

    This paper proposes a stripe-PZT sensor-based baseline-free crack diagnosis technique in the heat affected zone (HAZ) of a structure with a welded stiffener. The proposed technique enables one to identify and localize a crack in the HAZ using only current data measured using a stripe-PZT sensor. The use of the stripe-PZT sensor makes it possible to significantly improve the applicability to real structures and minimize man-made errors associated with the installation process by embedding multiple piezoelectric sensors onto a printed circuit board. Moreover, a new frequency-wavenumber analysis-based baseline-free crack diagnosis algorithm minimizes false alarms caused by environmental variations by avoiding simple comparison with the baseline data accumulated from the pristine condition of a target structure. The proposed technique is numerically as well as experimentally validated using a plate-like structure with a welded stiffener, revealing that it successfully identifies and localizes a crack in the HAZ. PMID:27649200

  18. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  19. Optimized Finite-Difference Coefficients for Hydroacoustic Modeling

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2014-12-01

    Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
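
    The coefficient-optimization idea described above can be illustrated generically: choose central finite-difference coefficients that minimize the L2 norm of the numerical wavenumber (dispersion) error over a target bandwidth, and compare the result with the standard Taylor-series coefficients of the same stencil width. This is a simplified sketch, not the coefficients or code used in the study.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    M = 2                                       # stencil half-width (5-point first derivative)
    h = 1.0                                     # grid spacing
    k = np.linspace(1e-3, 0.6 * np.pi, 400)     # target bandwidth (fraction of Nyquist)

    def numerical_wavenumber(a, k):
        """Effective wavenumber of a centered first-derivative stencil with coefficients a."""
        m = np.arange(1, M + 1)[:, None]
        return (2.0 / h) * np.sum(a[:, None] * np.sin(m * k * h), axis=0)

    taylor = np.array([2.0 / 3.0, -1.0 / 12.0])   # standard 4th-order Taylor coefficients

    def residual(a):
        return numerical_wavenumber(a, k) - k      # dispersion error across the bandwidth

    optimized = least_squares(residual, x0=taylor).x

    for name, a in (("Taylor", taylor), ("optimized", optimized)):
        err = np.max(np.abs(numerical_wavenumber(a, k) - k) / k)
        print(f"{name}: max relative wavenumber error over bandwidth = {err:.2e}")
    ```

    The optimized coefficients trade formal order of accuracy for a smaller dispersion error across the chosen bandwidth, which is the property that allows coarser grids or larger time steps than the Taylor-series baseline.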

  20. Development of Predictive Energy Management Strategies for Hybrid Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Baker, David

    Studies have shown that obtaining and utilizing information about the future state of vehicles can improve vehicle fuel economy (FE). However, there has been a lack of research into the impact of real-world prediction error on FE improvements, and whether near-term technologies can be utilized to improve FE. This study seeks to research the effect of prediction error on FE. First, a speed prediction method is developed, and trained with real-world driving data gathered only from the subject vehicle (a local data collection method). This speed prediction method informs a predictive powertrain controller to determine the optimal engine operation for various prediction durations. The optimal engine operation is input into a high-fidelity model of the FE of a Toyota Prius. A tradeoff analysis between prediction duration and prediction fidelity was completed to determine what duration of prediction resulted in the largest FE improvement. Results demonstrate that 60-90 second predictions resulted in the highest FE improvement over the baseline, achieving up to a 4.8% FE increase. A second speed prediction method utilizing simulated vehicle-to-vehicle (V2V) communication was developed to understand if incorporating near-term technologies could be utilized to further improve prediction fidelity. This prediction method produced lower variation in speed prediction error, and was able to realize a larger FE improvement over the local prediction method for longer prediction durations, achieving up to 6% FE improvement. This study concludes that speed prediction and prediction-informed optimal vehicle energy management can produce FE improvements with real-world prediction error and drive cycle variability, as up to 85% of the FE benefit of perfect speed prediction was achieved with the proposed prediction methods.
