Sample records for "valid estimates based"

  1. Temporal validation for Landsat-based volume estimation model

    Treesearch

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four-county study...

  2. The validity and reproducibility of food-frequency questionnaire-based total antioxidant capacity estimates in Swedish women.

    PubMed

    Rautiainen, Susanne; Serafini, Mauro; Morgenstern, Ralf; Prior, Ronald L; Wolk, Alicja

    2008-05-01

    Total antioxidant capacity (TAC) provides an assessment of antioxidant activity and synergistic interactions of redox molecules in foods and plasma. We investigated the validity and reproducibility of food-frequency questionnaire (FFQ)-based TAC estimates assessed by oxygen radical absorbance capacity (ORAC), total radical-trapping antioxidant parameters (TRAP), and ferric-reducing antioxidant power (FRAP) food values. Validity and reproducibility were evaluated in 2 random samples from the Swedish Mammography Cohort. Validity was studied by comparing FFQ-based TAC estimates with one measurement of plasma TAC in 108 women (54-73-y-old dietary supplement nonusers). Reproducibility was studied in 300 women (56-75 y old, 50.7% dietary supplement nonusers) who completed 2 FFQs 1 y apart. Fruit and vegetables (mainly apples, pears, oranges, and berries) were the major contributors to FFQ-based ORAC (56.5%), TRAP (41.7%), and FRAP (38.0%) estimates. In the validity study, whole plasma ORAC was correlated (Pearson) with FFQ-based ORAC (r = 0.35), TRAP (r = 0.31), and FRAP (r = 0.28) estimates from fruit and vegetables. Correlations between lipophilic plasma ORAC and FFQ-based ORAC, TRAP, and FRAP estimates from fruit and vegetables were 0.41, 0.31, and 0.28, and correlations with plasma TRAP estimates were 0.31, 0.30, and 0.28, respectively. Hydrophilic plasma ORAC and plasma FRAP values did not correlate with FFQ-based TAC estimates. Reproducibility, assessed by intraclass correlations, was 0.60, 0.61, and 0.61 for FFQ-based ORAC, TRAP, and FRAP estimates, respectively, from fruit and vegetables. FFQ-based TAC values represent valid and reproducible estimates that may be used in nutritional epidemiology to assess antioxidant intake from foods. Further studies in other populations to confirm these results are needed.
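The two statistics this record relies on — Pearson correlations for validity against plasma TAC, and intraclass correlations for reproducibility of repeated FFQs — can be sketched as follows. This is an illustrative one-way ICC for two administrations per subject, not the authors' code; the function names are invented.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between paired measurements (validity)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def icc_oneway(m1, m2):
    """One-way random-effects intraclass correlation for two repeated
    measurements per subject (reproducibility); rows = subjects."""
    data = np.column_stack([m1, m2]).astype(float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return float((ms_between - ms_within) / (ms_between + (k - 1) * ms_within))
```

With perfectly reproduced measurements the ICC is 1; attenuation toward the reported 0.60-0.61 reflects within-subject variation between the two FFQ administrations.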

  3. Validation of Ground-based Optical Estimates of Auroral Electron Precipitation Energy Deposition

    NASA Astrophysics Data System (ADS)

    Hampton, D. L.; Grubbs, G. A., II; Conde, M.; Lynch, K. A.; Michell, R.; Zettergren, M. D.; Samara, M.; Ahrns, M. J.

    2017-12-01

    One of the major energy inputs into the high-latitude ionosphere and mesosphere is auroral electron precipitation. Not only is the kinetic energy deposited; the ensuing ionization in the E- and F-region ionosphere modulates parallel and horizontal currents that can dissipate in the form of Joule heating. Global models of these interactions typically use electron precipitation models that poorly represent the spatial and temporal complexity of auroral activity as observed from the ground, largely because those precipitation models are based on averages of multiple satellite overpasses separated by periods much longer than typical auroral feature durations. With the development of regional and continental observing networks (e.g. THEMIS ASI), ground-based optical observations that produce quantitative estimates of energy deposition at temporal and spatial scales comparable to those exhibited by auroral activity become a real possibility. Like empirical precipitation models based on satellite overpasses, such optics-based estimates are subject to assumptions and uncertainties, and therefore require validation. Three recent sounding rocket missions offer such an opportunity. The MICA (2012), GREECE (2014) and Isinglass (2017) missions combined detailed ground-based observations of auroral arcs with extensive on-board instrumentation. These have afforded an opportunity to examine the results of three optical methods of determining auroral electron energy flux, namely 1) ratios of auroral emissions, 2) green-line temperature vs. emission altitude, and 3) parametric estimates using white-light images. We present comparisons from all three methods for all three missions and summarize the temporal and spatial scales and coverage over which each is valid.

  4. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure for validating remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics of precipitation estimates, and so fail to fully inform either data producers or users. The proposed validation procedure has two steps: 1) an error decomposition approach separates the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed with a multiplicative error model, whose three parameters capture the error features. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The procedure is used to quantitatively evaluate the two recent versions (Versions 6 and 7) of TRMM's Multi-satellite Precipitation Analysis (TMPA) research and real-time product suite (3B42 and 3B42RT) over seven years (2005-2011) across the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by missed precipitation over the west-coastal areas and the Rocky Mountains, and by false precipitation over large areas of the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model
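The two-step procedure described in this record can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the rain/no-rain threshold value and all variable names are assumptions.

```python
import numpy as np

def decompose_errors(est, ref, thresh=0.1):
    """Step 1: split total retrieval error into hit, false and missed
    components using a rain/no-rain threshold (assumed units: mm/day)."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    hit = (est >= thresh) & (ref >= thresh)
    false_p = (est >= thresh) & (ref < thresh)
    missed = (est < thresh) & (ref >= thresh)
    return {
        "hit_error": float(np.sum(est[hit] - ref[hit])),
        "false_precip": float(np.sum(est[false_p])),
        "missed_precip": float(-np.sum(ref[missed])),
    }

def fit_multiplicative(est, ref):
    """Step 2: fit est = alpha * ref**beta * eps on hit events via
    log-space least squares; the residual spread sigma is the random
    error, while alpha and beta capture the systematic error."""
    x, y = np.log(np.asarray(ref, float)), np.log(np.asarray(est, float))
    beta, log_alpha = np.polyfit(x, y, 1)
    resid = y - (log_alpha + beta * x)
    return float(np.exp(log_alpha)), float(beta), float(resid.std())
```

An unbiased product would give alpha ≈ 1 and beta ≈ 1 with small sigma; deviations separate the systematic and random parts of the hit error.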

  5. SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, R; Heron, D; Huq, M

    2015-06-15

    Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can generate estimated DVHs and planning constraints for organs at risk (OARs) based upon a model built from previous plans. These estimates are based upon the planning CT scan; however, for distensible OARs like the bladder and rectum, daily variations in volume may invalidate the dose estimates. The purpose of this study is to determine whether knowledge-based DVH dose estimates remain valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimates and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes at 75, 60 and 40 Gy for the bladder and rectum were then compared between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points. Volumes of the rectum and bladder varied widely between the planning CT and the CBCTs, with CBCT-to-planning-CT volume ratios ranging from 0.46 to 2.42 for the bladder and 0.71 to 2.18 for the rectum; this caused relative dose volumes to vary between planning CT and CBCT, but absolute dose volumes were more consistent. The overall ratio of CBCT-to-plan dose volumes was 1.02 ± 0.27 for the rectum and 0.98 ± 0.20 for the bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs remain valid, in absolute volume terms, between treatment planning scans and CBCTs taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.

  6. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    PubMed

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
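The inverted-pendulum step-length estimation this record compares can be sketched as follows, assuming Zijlstra's standard IP formula and the generic 1.25 correction factor mentioned in the abstract; the function names and the drift handling in the double integration are simplified assumptions, not the authors' processing pipeline.

```python
import numpy as np

def step_length_ip(h, leg_length, correction=1.25):
    """Simple inverted-pendulum step length from the vertical
    center-of-mass excursion h during one step:
    d = 2*sqrt(2*l*h - h**2), scaled by a generic correction factor."""
    d = 2.0 * np.sqrt(2.0 * leg_length * h - h ** 2)
    return float(correction * d)

def com_excursion(acc_v, fs):
    """Vertical CoM excursion: double-integrate the vertical trunk
    acceleration (gravity assumed already removed), with a crude
    mean-velocity detrend, and take the peak-to-peak range."""
    vel = np.cumsum(np.asarray(acc_v, float)) / fs
    pos = np.cumsum(vel - vel.mean()) / fs
    return float(pos.max() - pos.min())
```

The alternative generic adjustment in the abstract (adding 75% of foot length in a 2-phase model) would replace the multiplicative `correction` with an additive constant per step.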

  7. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolating and estimating the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. The low-cost method requires no knowledge of downlink band occupation or service characteristics and is applicable in situ: it needs only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations compared with signals measured using dedicated LTE decoders.

  8. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example in which we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
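The core idea of online cross-validation — score each candidate on a new batch before training on it, accumulate the loss, and predict with the currently best candidate — can be sketched as a discrete selector. This is a minimal illustration, not the authors' implementation; the `partial_fit`/`predict` learner interface and the toy candidates are assumptions.

```python
import numpy as np

class RunningMean:
    """Toy candidate: predicts the running mean of y seen so far."""
    def __init__(self):
        self.n, self.s = 0, 0.0
    def partial_fit(self, X, y):
        self.n += len(y)
        self.s += float(np.sum(y))
    def predict(self, X):
        return np.full(len(X), self.s / max(self.n, 1))

class Constant:
    """Toy candidate: always predicts a fixed value."""
    def __init__(self, c):
        self.c = c
    def partial_fit(self, X, y):
        pass
    def predict(self, X):
        return np.full(len(X), self.c)

class OnlineCVSelector:
    """Discrete online selector: each batch is first used to score every
    already-fitted candidate (online cross-validation), then used for
    training; predictions come from the lowest-cumulative-loss candidate."""
    def __init__(self, learners):
        self.learners = learners
        self.cum_loss = np.zeros(len(learners))

    def update(self, X, y):
        for i, lrn in enumerate(self.learners):
            if getattr(lrn, "seen_", False):  # score only fitted learners
                self.cum_loss[i] += float(np.mean((lrn.predict(X) - np.asarray(y)) ** 2))
            lrn.partial_fit(X, y)
            lrn.seen_ = True

    def predict(self, X):
        best = int(np.argmin(self.cum_loss))
        return self.learners[best].predict(X)
```

Because each batch is scored before it is trained on, the cumulative loss is an honest out-of-sample estimate, which underlies the oracle guarantee described in the abstract.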

  9. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example in which we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419

  10. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but data to validate it did not exist until recently. In this paper, data from repeated ...

  11. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, ...

  12. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
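The influence-curve approach to variance estimation for AUC can be sketched for a single sample as follows. This is a simplified, single-fold illustration of the idea, not the authors' code; ties are handled only approximately and the fold-averaging of the cross-validated version is omitted.

```python
import numpy as np

def auc_with_ic_ci(scores, labels, z=1.96):
    """Empirical AUC with an influence-curve based standard error.
    Positives contribute via the negative-score CDF at their score;
    negatives via the positive-score survival at their score."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, int)
    pos, neg = s[y == 1], s[y == 0]
    n, p = len(s), y.mean()
    # empirical AUC over all positive/negative pairs (ties count 1/2)
    diff = pos[:, None] - neg[None, :]
    auc = float(np.mean((diff > 0) + 0.5 * (diff == 0)))
    F_neg = np.array([np.mean(neg <= v) for v in s])   # CDF of negatives
    S_pos = np.array([np.mean(pos > v) for v in s])    # survival of positives
    ic = y / p * (F_neg - auc) + (1 - y) / (1 - p) * (S_pos - auc)
    se = float(np.sqrt(np.mean(ic ** 2) / n))
    return auc, (auc - z * se, auc + z * se)
```

Unlike the bootstrap, this requires a single pass over the data, which is what makes the approach tractable for massive data sets.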

  13. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  14. Evolving Improvements to TRMM Ground Validation Rainfall Estimates

    NASA Technical Reports Server (NTRS)

    Robinson, M.; Kulie, M. S.; Marks, D. A.; Wolff, D. B.; Ferrier, B. S.; Amitai, E.; Silberstein, D. S.; Fisher, B. L.; Wang, J.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. Since the successful 1997 launch of the TRMM satellite, GV rainfall estimates have demonstrated systematic improvements directly related to improved radar and rain gauge data, modified science techniques, and software revisions. Improved rainfall estimates have resulted in higher quality GV rainfall products and subsequently, much improved evaluation products for the satellite-based precipitation estimates from TRMM. This presentation will demonstrate how TRMM GV rainfall products created in a semi-automated, operational environment have evolved and improved through successive generations. Monthly rainfall maps and rainfall accumulation statistics for each primary site will be presented for each stage of GV product development. Contributions from individual product modifications involving radar reflectivity (Ze)-rain rate (R) relationship refinements, improvements in rain gauge bulk-adjustment and data quality control processes, and improved radar and gauge data will be discussed. Finally, it will be demonstrated that as GV rainfall products have improved, rainfall estimation comparisons between GV and satellite have converged, lending confidence to the satellite-derived precipitation measurements from TRMM.

  15. An object-based approach for areal rainfall estimation and validation of atmospheric models

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Simmer, Clemens

    2010-05-01

    … efficiency, and the possibility to classify continental and maritime systems using the effective efficiency confirm the informative value of the qualified descriptors. The IRVDs especially correct for the underestimation of intense rain events, and the information content of the descriptors is most likely higher than demonstrated so far. We used quite sparse information on the meteorological variables needed to calculate some IRVDs, taken from single radiosoundings, and several descriptors suffered from the range-dependent vertical resolution of the reflectivity profile. Inclusion of neighbouring radars and assimilation runs of weather forecasting models will further enhance the accuracy of the rainfall estimates. Finally, the clear difference between the IRVD selection from the pseudo-radar data and from the real-world data hints at a new object-based avenue for the validation of higher-resolution atmospheric models and for evaluating their potential to digest radar observations in data assimilation schemes.

  16. Validating a new methodology for strain estimation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Elnakib, Ahmed; Beache, Garth M.; Gimel'farb, Georgy; Inanc, Tamer; El-Baz, Ayman

    2013-10-01

    This paper focuses on validating a novel framework for estimating the functional strain from cine cardiac magnetic resonance imaging (CMRI). The framework consists of three processing steps. First, the left ventricle (LV) wall borders are segmented using a level-set based deformable model. Second, the points on the wall borders are tracked during the cardiac cycle based on solving the Laplace equation between the LV edges. Finally, the circumferential and radial strains are estimated at the inner, mid-wall, and outer borders of the LV wall. The proposed framework is validated using synthetic phantoms of the material strains that account for the physiological features and the LV response during the cardiac cycle. Experimental results on simulated phantom images confirm the accuracy and robustness of our method.

  17. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    Objectives The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Materials and Methods Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. Results From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42–0.79), with the 1.5 mile (rp = 0.79, 0.73–0.85) and 12 min walk/run tests (rp = 0.78, 0.72–0.83) having the highest criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. Conclusions When the evaluation of an individual’s maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness. PMID:26987118

  18. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis.

    PubMed

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the highest criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.
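The "bare-bones" Hunter-Schmidt pooling used in this meta-analysis can be sketched as follows: a sample-size weighted mean correlation, with the expected sampling-error variance subtracted from the observed between-study variance. This is an illustrative sketch of the standard approach, not the authors' code.

```python
import numpy as np

def hunter_schmidt(rs, ns):
    """Bare-bones Hunter-Schmidt meta-analysis of correlations.
    rs: per-study correlations; ns: per-study sample sizes.
    Returns (weighted mean r, estimated population variance of r)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = np.sum(ns * rs) / np.sum(ns)
    var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)      # observed variance
    var_err = (1 - r_bar ** 2) ** 2 * len(rs) / np.sum(ns)     # sampling-error variance
    return float(r_bar), float(max(var_obs - var_err, 0.0))
```

A residual population variance near zero suggests the between-study spread of correlations is mostly sampling error rather than true moderation (e.g. by sex or age, as the abstract reports).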

  19. Validation of thigh-based accelerometer estimates of postural allocation in 5-12 year-olds.

    PubMed

    van Loo, Christiana M T; Okely, Anthony D; Batterham, Marijka J; Hinkley, Trina; Ekelund, Ulf; Brage, Søren; Reilly, John J; Jones, Rachel A; Janssen, Xanne; Cliff, Dylan P

    2017-03-01

    To validate activPAL3™ (AP3) for classifying postural allocation, estimating time spent in postures and examining the number of breaks in sedentary behaviour (SB) in 5-12 year-olds. Laboratory-based validation study. Fifty-seven children completed 15 sedentary, light- and moderate-to-vigorous intensity activities. Direct observation (DO) was used as the criterion measure. The accuracy of AP3 was examined using a confusion matrix, equivalence testing, Bland-Altman procedures and a paired t-test for 5-8y and 9-12y. Sensitivity of AP3 was 86.8%, 82.5% and 85.3% for sitting/lying, standing, and stepping, respectively, in 5-8y and 95.3%, 81.5% and 85.1%, respectively, in 9-12y. Time estimates of AP3 were equivalent to DO for sitting/lying in 9-12y and stepping in all ages, but not for sitting/lying in 5-12y and standing in all ages. Underestimation of sitting/lying time was smaller in 9-12y (1.4%, limits of agreement [LoA]: -13.8 to 11.1%) compared to 5-8y (12.6%, LoA: -39.8 to 14.7%). Underestimation for stepping time was small (5-8y: 6.5%, LoA: -18.3 to 5.3%; 9-12y: 7.6%, LoA: -16.8 to 1.6%). Considerable overestimation was found for standing (5-8y: 36.8%, LoA: -16.3 to 89.8%; 9-12y: 19.3%, LoA: -1.6 to 36.9%). SB breaks were significantly overestimated (5-8y: 53.2%, 9-12y: 28.3%, p<0.001). AP3 showed acceptable accuracy for classifying postures, however estimates of time spent standing were consistently overestimated and individual error was considerable. Estimates of sitting/lying were more accurate for 9-12y. Stepping time was accurately estimated for all ages. SB breaks were significantly overestimated, although the absolute difference was larger in 5-8y. Surveillance applications of AP3 would be acceptable, however, individual level applications might be less accurate. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
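The Bland-Altman procedure behind the percentage biases and limits of agreement reported above can be sketched as follows. This is an illustrative sketch; the function name and the percentage-difference convention (relative to the criterion) are assumptions.

```python
import numpy as np

def bland_altman_pct(device, criterion):
    """Mean percentage bias of a device against a criterion measure
    (here, direct observation), with 95% limits of agreement."""
    device = np.asarray(device, float)
    criterion = np.asarray(criterion, float)
    diff_pct = 100.0 * (device - criterion) / criterion
    bias = diff_pct.mean()
    half_width = 1.96 * diff_pct.std(ddof=1)  # 95% limits of agreement
    return float(bias), (float(bias - half_width), float(bias + half_width))
```

Wide limits of agreement with a small mean bias — the pattern reported for standing time — indicate acceptable group-level (surveillance) accuracy but considerable individual-level error.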

  20. Qualitative Research Findings: What Do We Do to Improve and Estimate Their Validity?

    ERIC Educational Resources Information Center

    Dawson, Judith A.

    This paper is based on the premise that relatively little is known about how to improve validity in qualitative research and less is known about how to estimate validity in studies conducted by others. The purpose of the study was to describe the conceptualization of validity in qualitative inquiry to determine how it was used by the author of a…

  1. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model-based control techniques to enhance the safe operation of lithium-ion batteries. An overview of the contributions that address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first-principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial-differential-equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the

  2. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims to validate left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level-set technique. Validation was performed by comparing real-time volumetric echocardiographic (RT3DE) data and magnetic resonance imaging (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Taking the MRI estimates (x) as reference, an excellent correlation was found with the volumes measured using the segmentation procedure (y) (y = 0.89x + 13.78, r = 0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrates that the segmentation technique is reliably applicable to human hearts in clinical practice.

  3. Evaluating Cardiovascular Health Disparities Using Estimated Race/Ethnicity: A Validation Study.

    PubMed

    Bykov, Katsiaryna; Franklin, Jessica M; Toscano, Michele; Rawlins, Wayne; Spettell, Claire M; McMahill-Walraven, Cheryl N; Shrank, William H; Choudhry, Niteesh K

    2015-12-01

    Methods of estimating race/ethnicity using administrative data are increasingly used to examine and target disparities; however, there has been no validation of these methods using clinically relevant outcomes. To evaluate the validity of the indirect method of race/ethnicity identification based on place of residence and surname for assessing clinically relevant outcomes. A total of 2387 participants in the Post-MI Free Rx Event and Economic Evaluation (MI FREEE) trial who had both self-reported and Bayesian Improved Surname Geocoding (BISG)-estimated race/ethnicity information available. We used tests of interaction to compare differences in the effect of providing full drug coverage for post-MI medications on adherence and rates of major vascular events or revascularization for white and nonwhite patients based upon self-reported and indirect racial/ethnic assignment. The impact of full coverage on clinical events differed substantially when based upon self-identified race (HR=0.97 for whites, HR=0.65 for nonwhites; interaction P-value=0.05); however, it did not differ among race/ethnicity groups classified using indirect methods (HR=0.87 for whites and nonwhites; interaction P-value=0.83). The impact on adherence was the same for self-reported and BISG-estimated race/ethnicity for 2 of the 3 medication classes studied. Quantitatively and qualitatively different results were obtained when indirectly estimated race/ethnicity was used, suggesting that these techniques may not accurately describe aspects of race/ethnicity related to actual health behaviors.

  4. Validity and practicability of smartphone-based photographic food records for estimating energy and nutrient intake.

    PubMed

    Kong, Kaimeng; Zhang, Lulu; Huang, Lisu; Tao, Yexuan

    2017-05-01

    Image-assisted dietary assessment methods are frequently used to record individual eating habits. This study tested the validity of a smartphone-based photographic food recording approach by comparing the results obtained with those of a weighed food record. We also assessed the practicality of the method by using it to measure the energy and nutrient intake of college students. The experiment was implemented in two phases, each lasting 2 weeks. In the first phase, a labelled menu and a photograph database were constructed. The energy and nutrient content of 31 randomly selected dishes in three different portion sizes were then estimated by the photograph-based method and compared with a weighed food record. In the second phase, we combined the smartphone-based photographic method with the WeChat smartphone application and applied this to 120 randomly selected participants to record their energy and nutrient intake. The Pearson correlation coefficients for energy, protein, fat, and carbohydrate content between the weighed and the photographic food record were 0.997, 0.936, 0.996, and 0.999, respectively. Bland-Altman plots showed good agreement between the two methods. The estimated protein, fat, and carbohydrate intake by participants was in accordance with values in the Chinese Residents' Nutrition and Chronic Disease report (2015). Participants expressed satisfaction with the new approach and the compliance rate was 97.5%. The smartphone-based photographic dietary assessment method combined with the WeChat instant messaging application was effective and practical for use by young people.
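
    The Bland-Altman agreement check reported above amounts to the mean difference (bias) between paired measurements and its 95% limits of agreement. A minimal sketch with hypothetical per-dish energy values (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                       # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired energy estimates (kcal): weighed vs. photographic record
weighed = np.array([520.0, 310.0, 840.0, 650.0, 430.0])
photo   = np.array([505.0, 320.0, 860.0, 640.0, 425.0])
bias, (lo, hi) = bland_altman(weighed, photo)
```

    Good agreement corresponds to a bias near zero and narrow limits relative to the scale of intake.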

  5. Global cost of child survival: estimates from country-level validation

    PubMed Central

    van Ekdom, Liselore; Scherpbier, Robert W; Niessen, Louis W

    2011-01-01

    Objective To cross-validate the global cost of scaling up child survival interventions to achieve the fourth Millennium Development Goal (MDG4), as estimated by the World Health Organization (WHO) in 2007, by using the latest country-provided data and new assumptions. Methods After the main cost categories for each country were identified, validation questionnaires were sent to 32 countries with high child mortality. Publicly available estimates for disease incidence, intervention coverage, prices and resources for individual-level and programme-level activities were validated against local data. Nine updates to the 2007 WHO model were generated using revised assumptions. Finally, estimates were extrapolated to 75 countries and combined with cost estimates for immunization and malaria programmes and for programmes for the prevention of mother-to-child transmission of the human immunodeficiency virus (HIV). Findings Twenty-six countries responded. Adjustments were largest for system- and programme-level data and smallest for patient data. Country-level validation caused a 53% increase in the original cost estimates (i.e. 9 billion 2004 United States dollars [US$]) for the 26 countries, owing to revised system and programme assumptions, especially surrounding community health worker costs. The additional effect of updated population figures was small; updated epidemiologic figures increased costs by US$ 4 billion (+15%). New unit prices in the 26 countries that provided data increased estimates by US$ 4.3 billion (+16%). Extrapolation to 75 countries increased the original price estimate by US$ 33 billion (+80%) for 2010–2015. Conclusion Country-level validation had a significant effect on the cost estimate. Price adaptations and programme-related assumptions contributed substantially. An additional US$ 74 billion (2005 dollars; representing a 12% increase in total health expenditure) would be needed between 2010 and 2015. Given resource constraints, countries will need to

  6. Estimating and validating ground-based timber harvesting production through computer simulation

    Treesearch

    Jingxin Wang; Chris B. LeDoux

    2003-01-01

    Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...

  7. The validity of ultrasound estimation of muscle volumes.

    PubMed

    Infantolino, Benjamin W; Gales, Daniel J; Winter, Samantha L; Challis, John H

    2007-08-01

    The purpose of this study was to validate ultrasound muscle volume estimation in vivo. To examine validity, vastus lateralis ultrasound images were collected from cadavers before muscle dissection; after dissection, the volumes were determined by hydrostatic weighing. Seven thighs from cadaver specimens were scanned using a 7.5-MHz ultrasound probe (SSD-1000, Aloka, Japan). The perimeter of the vastus lateralis was identified in the ultrasound images and manually digitized. Volumes were then estimated using the Cavalieri principle, by measuring the image areas of sets of parallel two-dimensional slices through the muscles. The muscles were then dissected from the cadavers, and muscle volume was determined via hydrostatic weighing. There was no statistically significant difference between the ultrasound estimation of muscle volume and that estimated using hydrostatic weighing (p > 0.05). The mean percentage error between the two volume estimates was 0.4% +/- 6.9. Three operators all performed four digitizations of all images from one randomly selected muscle; there was no statistical difference between operators or trials and the intraclass correlation was high (>0.8). The results of this study indicate that ultrasound is an accurate method for estimating muscle volumes in vivo.
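
    The Cavalieri principle used here estimates volume as the slice spacing times the sum of the cross-sectional areas of the parallel slices. A minimal sketch with hypothetical digitized areas (not the cadaver measurements):

```python
def cavalieri_volume(slice_areas_cm2, spacing_cm):
    """Cavalieri estimator: volume ~= spacing * sum of parallel slice areas."""
    return spacing_cm * sum(slice_areas_cm2)

# Hypothetical cross-sectional areas (cm^2) digitized along the muscle
areas = [4.0, 9.5, 12.0, 11.0, 6.5, 2.0]
volume = cavalieri_volume(areas, spacing_cm=2.0)  # 2 cm between slices
```

    In practice each area comes from manually digitizing the muscle perimeter in one ultrasound image, and the estimate improves with more, thinner slices.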

  8. Validity of eyeball estimation for range of motion during the cervical flexion rotation test compared to an ultrasound-based movement analysis system.

    PubMed

    Schäfer, Axel; Lüdtke, Kerstin; Breuel, Franziska; Gerloff, Nikolas; Knust, Maren; Kollitsch, Christian; Laukart, Alex; Matej, Laura; Müller, Antje; Schöttker-Königer, Thomas; Hall, Toby

    2018-08-01

    Headache is a common and costly health problem. Although the pathogenesis of headache is heterogeneous, one reported contributing factor is dysfunction of the upper cervical spine. The flexion rotation test (FRT) is a commonly used diagnostic test to detect upper cervical movement impairment. The aim of this cross-sectional study was to investigate the concurrent validity of detecting high cervical ROM impairment during the FRT by comparing measurements established by an ultrasound-based system (gold standard) with eyeball estimation. A secondary aim was to investigate the intra-rater reliability of FRT ROM eyeball estimation. The examiner (6 years' experience) was blinded to the data from the ultrasound-based device and to the symptoms of the patients. The FRT test result (positive or negative) was based on visual estimation of a range of rotation of less than 34° to either side. Concurrently, the range of rotation was evaluated using the ultrasound-based device. A total of 43 subjects with headache (79% female), mean age 35.05 years (SD 13.26), were included. According to the International Headache Society Classification, 23 subjects had migraine, 4 tension-type headache, and 16 multiple headache forms. Sensitivity and specificity were 0.96 and 0.89 for combined rotation, indicating good concurrent validity. The area under the ROC curve was 0.95 (95% CI 0.91-0.98) for rotation to both sides. Intra-rater reliability for eyeball estimation was excellent, with Fleiss kappa 0.79 for right rotation and left rotation. The results of this study indicate that the FRT is a valid and reliable test to detect impairment of upper cervical ROM in patients with headache.
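
    Sensitivity and specificity against the ultrasound gold standard follow directly from the 2x2 confusion counts. A small sketch with hypothetical test outcomes (not the study's ratings):

```python
def sensitivity_specificity(test_pos, ref_pos):
    """test_pos, ref_pos: parallel lists of booleans (index test vs. gold standard)."""
    tp = sum(t and r for t, r in zip(test_pos, ref_pos))          # true positives
    tn = sum((not t) and (not r) for t, r in zip(test_pos, ref_pos))
    fp = sum(t and (not r) for t, r in zip(test_pos, ref_pos))
    fn = sum((not t) and r for t, r in zip(test_pos, ref_pos))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical FRT ratings: eyeball-positive vs. ultrasound ROM < 34 degrees
eyeball    = [True, True, False, True, False, False]
ultrasound = [True, True, False, False, False, False]
sens, spec = sensitivity_specificity(eyeball, ultrasound)
```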

  9. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  10. Estimating activity energy expenditure: how valid are physical activity questionnaires?

    PubMed

    Neilson, Heather K; Robson, Paula J; Friedenreich, Christine M; Csizmadi, Ilona

    2008-02-01

    Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.

  11. View Estimation Based on Value System

    NASA Astrophysics Data System (ADS)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention behind the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating the behavior observed from the caregiver, he/she updates the model of his/her own estimated view so as to minimize the estimation error of the reward during the behavior. Based on this view, this paper presents a method for acquiring such a capability from a value system in which values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
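
    The parameter update described above follows the standard TD(0) rule: an estimate is nudged by the temporal difference error. A minimal tabular sketch (the states and values are illustrative, not the robots' actual representation):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V[s] toward r + gamma*V[s_next] by the TD error."""
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

# Toy value table: value propagates back from a rewarding goal state
V = {"far": 0.0, "near": 0.0, "goal": 1.0}
err = td_update(V, "near", 0.0, "goal")
```

    In the paper's method the same error signal drives updates of the view-estimation parameters, not just the state values.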

  12. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).

  13. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating

  14. Validation of Satellite-based Rainfall Estimates for Severe Storms (Hurricanes & Tornados)

    NASA Astrophysics Data System (ADS)

    Nourozi, N.; Mahani, S.; Khanbilvardi, R.

    2005-12-01

    Severe storms such as hurricanes and tornadoes cause devastating damage, almost every year, over a large section of the United States. More accurate forecasting of the intensity and track of a heavy storm can help to reduce, if not prevent, its damage to lives, infrastructure, and the economy. Producing accurate high-resolution quantitative precipitation estimates (QPE) for a hurricane, required to improve forecasting and warning capabilities, is still a challenging problem because of the physical characteristics of the hurricane, even while it is still over the ocean. Satellite imagery seems to be a valuable source of information for estimating and forecasting heavy precipitation and flash floods, particularly over the oceans, where traditional ground-based gauge and radar sources cannot provide any information. To improve the capability of a rainfall retrieval algorithm for estimating QPE of severe storms, its product is evaluated in this study. High-resolution (hourly, 4 km x 4 km) satellite infrared-based rainfall products from the NESDIS Hydro-Estimator (HE) and PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) algorithms have been tested against NEXRAD Stage IV and rain gauge observations in this project. Three strong hurricanes that caused devastating damage over Florida in summer 2004 were investigated: Charley (category 4), Jeanne (category 3), and Ivan (category 3). Preliminary results demonstrate that the HE tends to underestimate rain rates when NEXRAD shows heavy rainfall (rain rates greater than 25 mm/hr) and to overestimate when NEXRAD gives low rainfall amounts, whereas PERSIANN tends to underestimate rain rates in general.

  15. Towards valid 'serious non-fatal injury' indicators for international comparisons based on probability of admission estimates.

    PubMed

    Cryer, Colin; Miller, Ted R; Lyons, Ronan A; Macpherson, Alison K; Pérez, Katherine; Petridou, Eleni Th; Dessypris, Nick; Davie, Gabrielle S; Gulliver, Pauline J; Lauritsen, Jens; Boufous, Soufiane; Lawrence, Bruce; de Graaf, Brandon; Steiner, Claudia A

    2017-02-01

    Governments wish to compare their performance in preventing serious injury. International comparisons based on hospital inpatient records are typically contaminated by variations in health services utilisation. To reduce these effects, a serious injury case definition has been proposed based on diagnoses with a high probability of inpatient admission (PrA). The aim of this paper was to identify diagnoses with estimated high PrA for selected developed countries. The study population was injured persons of all ages who attended emergency department (ED) for their injury in regions of Canada, Denmark, Greece, Spain and the USA. International Classification of Diseases (ICD)-9 or ICD-10 4-digit/character injury diagnosis-specific ED attendance and inpatient admission counts were provided, based on a common protocol. Diagnosis-specific and region-specific PrAs with 95% CIs were calculated. The results confirmed that femoral fractures have high PrA across all countries studied. Strong evidence for high PrA also exists for fracture of base of skull with cerebral laceration and contusion; intracranial haemorrhage; open fracture of radius, ulna, tibia and fibula; pneumohaemothorax and injury to the liver and spleen. Slightly weaker evidence exists for cerebellar or brain stem laceration; closed fracture of the tibia and fibula; open and closed fracture of the ankle; haemothorax and injury to the heart and lung. Using a large study size, we identified injury diagnoses with high estimated PrAs. These diagnoses can be used as the basis for more valid international comparisons of life-threatening injury, based on hospital discharge data, for countries with well-developed healthcare and data collection systems.
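
    A diagnosis-specific probability of admission (PrA) with its 95% CI can be sketched as a binomial proportion with a Wilson score interval; the abstract does not state which interval method was used, and the counts below are hypothetical:

```python
import math

def prob_admission(admitted, attended, z=1.96):
    """Diagnosis-specific PrA (admissions / ED attendances) with a Wilson 95% CI."""
    p = admitted / attended
    denom = 1 + z * z / attended
    centre = (p + z * z / (2 * attended)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / attended
                                   + z * z / (4 * attended ** 2))
    return p, (centre - half, centre + half)

# Hypothetical counts for one diagnosis: 95 admissions out of 100 ED attendances
p, (lo, hi) = prob_admission(admitted=95, attended=100)
```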

  16. Validation of the physical and RBE-weighted dose estimator based on PHITS coupled with a microdosimetric kinetic model for proton therapy

    PubMed Central

    Sato, Tatsuhiko; Kumada, Hiroaki; Koketsu, Junichi; Takei, Hideyuki; Sakurai, Hideyuki; Sakae, Takeji

    2018-01-01

    The microdosimetric kinetic model (MKM) is widely used for estimating relative biological effectiveness (RBE)-weighted doses for various radiotherapies because it can determine the surviving fraction of irradiated cells based on only the lineal energy distribution, and it is independent of the radiation type and ion species. However, the applicability of the method to proton therapy has not yet been investigated thoroughly. In this study, we validated the RBE-weighted dose calculated by the MKM in tandem with the Monte Carlo code PHITS for proton therapy by considering the complete simulation geometry of the clinical proton beam line. The physical dose, lineal energy distribution, and RBE-weighted dose for a 155 MeV mono-energetic and spread-out Bragg peak (SOBP) beam of 60 mm width were evaluated. In estimating the physical dose, the calculated depth dose distribution by irradiating the mono-energetic beam using PHITS was consistent with the data measured by a diode detector. A maximum difference of 3.1% in the depth distribution was observed for the SOBP beam. In the RBE-weighted dose validation, the calculated lineal energy distributions generally agreed well with the published measurement data. The calculated and measured RBE-weighted doses were in excellent agreement, except at the Bragg peak region of the mono-energetic beam, where the calculation overestimated the measured data by ~15%. This research has provided a computational microdosimetric approach based on a combination of PHITS and MKM for typical clinical proton beams. The developed RBE-estimator function has potential application in the treatment planning system for various radiotherapies. PMID:29087492

  17. Postcraniometric sex and ancestry estimation in South Africa: a validation study.

    PubMed

    Liebenberg, Leandi; Krüger, Gabriele C; L'Abbé, Ericka N; Stull, Kyra E

    2018-05-24

    With the acceptance of the Daubert criteria as the standards for best practice in forensic anthropological research, more emphasis is being placed on the validation of published methods. Methods, both traditional and novel, need to be validated, adjusted, and refined for optimal performance within forensic anthropological analyses. Recently, a custom postcranial database of modern South Africans was created for use in Fordisc 3.1. Classification accuracies of up to 85% for ancestry estimation and 98% for sex estimation were achieved using a multivariate approach. To measure the external validity and report more realistic performance statistics, an independent sample was tested. The postcrania from 180 black, white, and colored South Africans were measured and classified using the custom postcranial database. A decrease in accuracy was observed for both ancestry estimation (79%) and sex estimation (95%) of the validation sample. When incorporating both sex and ancestry simultaneously, the method achieved 70% accuracy, and 79% accuracy when sex-specific ancestry analyses were run. Classification matrices revealed that postcrania were more likely to misclassify as a result of ancestry rather than sex. While both sex and ancestry influence the size of an individual, sex differences are more marked in the postcranial skeleton and are therefore easier to identify. The external validity of the postcranial database was verified and therefore shown to be a useful tool for forensic casework in South Africa. While the classification rates were slightly lower than the original method, this is expected when a method is generalized.

  18. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
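
    The raw STA underlying these STRF estimates is simply the average stimulus window preceding each spike, before any gain or cluster thresholding is applied. A minimal sketch with a random hypothetical stimulus:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window):
    """Average the `window` stimulus frames preceding each spike (freq x time STA)."""
    segs = [stimulus[:, t - window:t] for t in spike_times if t >= window]
    return np.mean(segs, axis=0)

rng = np.random.default_rng(0)
stim = rng.standard_normal((8, 1000))   # 8 frequency bins x 1000 time bins
spikes = [100, 400, 650, 900]           # hypothetical spike bin indices
sta = spike_triggered_average(stim, spikes, window=20)
```

    The corrections evaluated in the paper then zero out pixels (or clusters of pixels) of this STA whose gain does not exceed a chance-level threshold.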

  19. Comparison of TRMM Ground Validation and Satellite Rain Intensity Estimates

    NASA Technical Reports Server (NTRS)

    Wolff, David B.; Lawrence, Richard

    2005-01-01

    The Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) Program began in the late 1980's and has provided a wealth of data and resources for validating TRMM satellite estimates. The TRMM GV program's main operational task is to provide rainfall products for four sites: Darwin, Australia (DARW); Houston, Texas (HSTN); Kwajalein, Republic of the Marshall Islands (KWAJ); and, Melbourne, Florida (MELB). A comparison between TRMM Ground Validation (Version 5) and Satellite (Version 6) rain intensity estimates is presented. The full suite of Version 6 satellite data is currently being generated by the TRMM Science Data and Information System (TSDIS) and should be completed some time near the end of 2005. The gridded satellite product (3G68) will be compared to GV Level II rain-intensity and -type maps (2A53 and 2A54, respectively). The 3G68 product represents a 0.5 deg x 0.5 deg data grid providing estimates of rain intensities from the TRMM Precipitation Radar (PR), Microwave Imager (TMI) and Combined (COM) algorithms. The comparisons will be sub-setted according to geographical type (land, coast and ocean). A bias statistic will be presented that provides quantification of the relative differences between the various estimators. Previous comparisons of an interim satellite product (Version 6a) showed that all of the estimates (GV and satellite) are converging, with some expected discrepancies. The convergence of the GV and satellite estimates bodes well for expectations for the proposed Global Precipitation Measurement (GPM) program and this study and others are being leveraged towards planning GV goals for GPM.
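
    A bias statistic quantifying relative differences between estimators can be sketched as the relative difference in total rainfall against the reference; the exact statistic used in the study is not specified here, and the gridded totals below are hypothetical:

```python
import numpy as np

def relative_bias(estimate, reference):
    """Relative bias (%) of a rain estimate against a reference estimator."""
    estimate = np.asarray(estimate, float)
    reference = np.asarray(reference, float)
    return 100.0 * (estimate.sum() - reference.sum()) / reference.sum()

# Hypothetical gridded rain totals (mm): satellite product vs. ground validation
bias_pct = relative_bias([11.0, 4.0, 0.0, 6.0], [9.0, 5.0, 1.0, 5.0])
```

    A bias near zero across land, coast, and ocean subsets is what "converging estimates" would look like numerically.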

  20. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  1. Comparison of Methods for Estimating Prevalence of Chronic Diseases and Health Behaviors for Small Geographic Areas: Boston Validation Study, 2013

    PubMed Central

    Holt, James B.; Zhang, Xingyou; Lu, Hua; Shah, Snehal N.; Dooley, Daniel P.; Matthews, Kevin A.; Croft, Janet B.

    2017-01-01

    Introduction Local health authorities need small-area estimates for prevalence of chronic diseases and health behaviors for multiple purposes. We generated city-level and census-tract–level prevalence estimates of 27 measures for the 500 largest US cities. Methods To validate the methodology, we constructed multilevel logistic regressions to predict 10 selected health indicators among adults aged 18 years or older by using 2013 Behavioral Risk Factor Surveillance System (BRFSS) data; we applied their predicted probabilities to census population data to generate city-level, neighborhood-level, and zip-code–level estimates for the city of Boston, Massachusetts. Results By comparing the predicted estimates with their corresponding direct estimates from a locally administered survey (Boston BRFSS 2010 and 2013), we found that our model-based estimates for most of the selected health indicators at the city level were close to the direct estimates from the local survey. We also found strong correlation between the model-based estimates and direct survey estimates at neighborhood and zip code levels for most indicators. Conclusion Findings suggest that our model-based estimates are reliable and valid at the city level for certain health outcomes. Local health authorities can use the neighborhood-level estimates if high quality local health survey data are not otherwise available. PMID:29049020

  2. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for estimating Body Segment Inertial Parameters (BSIP) and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090

  3. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    NASA Astrophysics Data System (ADS)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex must begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements to reduce payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on the state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy the suitability condition for all state equations describing the energy resource transportation network. The state equations written in terms of calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called, in estimation theory, an estimation remainder. Large values of estimation remainders are an indicator of high errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate

  4. Validity and reliability of Optojump photoelectric cells for estimating vertical jump height.

    PubMed

    Glatthorn, Julia F; Gouge, Sylvain; Nussbaumer, Silvio; Stauffacher, Simone; Impellizzeri, Franco M; Maffiuletti, Nicola A

    2011-02-01

    Vertical jump is one of the most prevalent acts performed in several sport activities. It is therefore important to ensure that the measurements of vertical jump height made as a part of research or athlete support work have adequate validity and reliability. The aim of this study was to evaluate concurrent validity and reliability of the Optojump photocell system (Microgate, Bolzano, Italy) against force plate measurements for estimating vertical jump height. Twenty subjects were asked to perform maximal squat jumps and countermovement jumps, and flight-time-derived jump heights obtained by the force plate were compared with those provided by Optojump, to examine its concurrent (criterion-related) validity (study 1). Twenty other subjects completed the same jump series on 2 different occasions (separated by 1 week), and jump heights of session 1 were compared with those of session 2, to investigate test-retest reliability of the Optojump system (study 2). Intraclass correlation coefficients (ICCs) for validity were very high (0.997-0.998), although a systematic difference was consistently observed between force plate and Optojump (-1.06 cm; p < 0.001). Test-retest reliability of the Optojump system was excellent, with ICCs ranging from 0.982 to 0.989, low coefficients of variation (2.7%), and low random errors (±2.81 cm). The Optojump photocell system demonstrated strong concurrent validity and excellent test-retest reliability for the estimation of vertical jump height. We propose the following equation that allows force plate and Optojump results to be used interchangeably: force plate jump height (cm) = 1.02 × Optojump jump height + 0.29. In conclusion, the use of Optojump photoelectric cells is legitimate for field-based assessments of vertical jump height.
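
    The conversion equation reported in this abstract, together with the standard flight-time method for jump height (h = g·t²/8), can be sketched as follows. This is an illustrative sketch, not code from the study; the function names are invented here, and heights are assumed in centimetres with flight time in seconds.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_s):
    """Flight-time method: h = g * t^2 / 8, converted to cm."""
    return 100.0 * G * t_s * t_s / 8.0

def force_plate_equivalent(optojump_cm):
    """Interchangeability equation reported in the abstract:
    force plate jump height (cm) = 1.02 * Optojump jump height + 0.29."""
    return 1.02 * optojump_cm + 0.29

# A 0.5 s flight time corresponds to roughly 30.7 cm of jump height.
h = jump_height_from_flight_time(0.5)
fp = force_plate_equivalent(h)
```

    The ~0.3–1 cm offset between the two systems reflects the systematic difference (-1.06 cm) the study reports for raw Optojump readings.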

  5. Validation of equations for pleural effusion volume estimation by ultrasonography.

    PubMed

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

    To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion (H), distance between collapsed lung and chest wall (C), and distance between lung and diaphragm (D). Cases whose effusion was aspirated to dryness were included, and the drained volume was recorded. The intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was (H + D) × 70 (ICC 0.83). The simplest yet accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating the distance between lung base and diaphragm into the estimation improves accuracy from 79% (H × 100) to 83% ((H + D) × 70).
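
    The two best-performing equations from this abstract can be sketched directly. This is an illustrative sketch: the function names are invented, and the units (measurements in cm, volumes in mL) are assumed from the conventional use of these formulas rather than stated explicitly in the abstract.

```python
def effusion_volume_hd(h_cm, d_cm):
    """Most accurate equation in the study: V (mL) = (H + D) * 70, ICC 0.83."""
    return (h_cm + d_cm) * 70.0

def effusion_volume_h(h_cm):
    """Simplest accurate equation: V (mL) = H * 100, ICC 0.79."""
    return h_cm * 100.0

# Example: H = 6 cm, D = 4 cm
v1 = effusion_volume_hd(6.0, 4.0)  # (6 + 4) * 70 = 700 mL
v2 = effusion_volume_h(6.0)        # 6 * 100 = 600 mL
```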

  6. Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.

    PubMed

    Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A

    2015-01-01

    The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13%; SWA = 18 ± 18%). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE that were similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate-, and vigorous-intensity physical activity.

  7. Vehicle Lateral State Estimation Based on Measured Tyre Forces

    PubMed Central

    Tuononen, Ari J.

    2009-01-01

    Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
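
    The estimator described above is a Kalman filter driven by tyre-force measurements. The measurement-update step at its core can be sketched in scalar form. This is a simplified illustration under stated assumptions: the paper's filter is multivariate with adaptive measurement covariance, whereas this sketch fuses a single state with a single noisy measurement, and all names are invented here.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update.
    x: prior state estimate, p: its variance,
    z: measurement, r: measurement-noise variance."""
    k = p / (p + r)           # Kalman gain: weight given to the measurement
    x_new = x + k * (z - x)   # corrected estimate, pulled toward z
    p_new = (1.0 - k) * p     # uncertainty shrinks after the update
    return x_new, p_new

# Equally trusted prior and measurement meet halfway,
# and the posterior variance is halved.
x, p = kalman_update(0.0, 1.0, 1.0, 1.0)
```

    The adaptive covariance scheme mentioned in the abstract would correspond to varying r with the detected driving condition, so that dubious force measurements receive a smaller gain k.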

  8. Validation of the physical and RBE-weighted dose estimator based on PHITS coupled with a microdosimetric kinetic model for proton therapy.

    PubMed

    Takada, Kenta; Sato, Tatsuhiko; Kumada, Hiroaki; Koketsu, Junichi; Takei, Hideyuki; Sakurai, Hideyuki; Sakae, Takeji

    2018-01-01

    The microdosimetric kinetic model (MKM) is widely used for estimating relative biological effectiveness (RBE)-weighted doses for various radiotherapies because it can determine the surviving fraction of irradiated cells based on only the lineal energy distribution, and it is independent of the radiation type and ion species. However, the applicability of the method to proton therapy has not yet been investigated thoroughly. In this study, we validated the RBE-weighted dose calculated by the MKM in tandem with the Monte Carlo code PHITS for proton therapy by considering the complete simulation geometry of the clinical proton beam line. The physical dose, lineal energy distribution, and RBE-weighted dose for a 155 MeV mono-energetic and spread-out Bragg peak (SOBP) beam of 60 mm width were evaluated. In estimating the physical dose, the calculated depth dose distribution by irradiating the mono-energetic beam using PHITS was consistent with the data measured by a diode detector. A maximum difference of 3.1% in the depth distribution was observed for the SOBP beam. In the RBE-weighted dose validation, the calculated lineal energy distributions generally agreed well with the published measurement data. The calculated and measured RBE-weighted doses were in excellent agreement, except at the Bragg peak region of the mono-energetic beam, where the calculation overestimated the measured data by ~15%. This research has provided a computational microdosimetric approach based on a combination of PHITS and MKM for typical clinical proton beams. The developed RBE-estimator function has potential application in the treatment planning system for various radiotherapies. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  9. Using Ground-Based Measurements and Retrievals to Validate Satellite Data

    NASA Technical Reports Server (NTRS)

    Dong, Xiquan

    2002-01-01

    The proposed research is to use the DOE ARM ground-based measurements and retrievals as the ground-truth references for validating satellite cloud results and retrieval algorithms. This validation effort proceeds in four different ways: (1) cloud properties on different satellites, and therefore different sensors, TRMM VIRS and TERRA MODIS; (2) cloud properties at different climatic regions, such as the DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, low- and high-level cloud properties; and (4) day and night retrieval algorithms. Validation of satellite-retrieved cloud properties is very difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, only if carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Even though the validation effort is so difficult, significant progress has been made during the proposed study period, and the major accomplishments are summarized in the following.

  10. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  11. Validation of Nimbus-7 temperature-humidity infrared radiometer estimates of cloud type and amount

    NASA Technical Reports Server (NTRS)

    Stowe, L. L.

    1982-01-01

    Estimates of clear and low, middle and high cloud amount in fixed geographical regions approximately (160 km) squared are being made routinely from 11.5 micron radiance measurements of the Nimbus-7 Temperature-Humidity Infrared Radiometer (THIR). The purpose of validation is to determine the accuracy of the THIR cloud estimates. Validation requires that a comparison be made between the THIR estimates of cloudiness and the 'true' cloudiness. The validation results reported in this paper use human analysis of concurrent but independent satellite images with surface meteorological and radiosonde observations to approximate the 'true' cloudiness. Regression and error analyses are used to estimate the systematic and random errors of THIR derived clear amount.

  12. The predictive validity of quality of evidence grades for the stability of effect estimates was low: a meta-epidemiological study.

    PubMed

    Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Bann, Carla; Robinson, Karen A; Reston, James; Thaler, Kylie; Skelly, Andrea; Glechner, Anna; Peterson, Kimberly; Kien, Christina; Lohr, Kathleen N

    2016-02-01

    To determine the predictive validity of the U.S. Evidence-based Practice Center (EPC) approach to GRADE (Grading of Recommendations Assessment, Development and Evaluation). Based on Cochrane reports with outcomes graded as high quality of evidence (QOE), we prepared 160 documents that represented different levels of QOE. Professional systematic reviewers dually graded the QOE. For each document, we determined whether estimates were concordant with the high-QOE estimates of the Cochrane reports. We compared the observed proportion of concordant estimates with the expected proportion from an international survey. To determine the predictive validity, we used the Hosmer-Lemeshow test to assess calibration and the C (concordance) index to assess discrimination. The predictive validity of the EPC approach to GRADE was limited. Estimates graded as high QOE were less likely than expected to remain stable, and estimates graded as low or insufficient QOE were more likely than expected to remain stable. The EPC approach to GRADE could not reliably predict the likelihood that individual bodies of evidence remain stable as new evidence becomes available. C-indices ranged between 0.56 (95% CI, 0.47 to 0.66) and 0.58 (95% CI, 0.50 to 0.67), indicating a low discriminatory ability. The limited predictive validity of the EPC approach to GRADE seems to reflect a mismatch between expected and observed changes in treatment effects as bodies of evidence advance from insufficient to high QOE. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. A global validation of ERA-Interim integrated water vapor estimates using ground-based GNSS observations

    NASA Astrophysics Data System (ADS)

    Ahmed, F.; Dousa, J.; Hunegnaw, A.; Teferle, F. N.; Bingley, R.

    2017-12-01

    Integrated water vapor (IWV) derived from climate reanalysis models, such as the European Centre for Medium-range Weather Forecasts (ECMWF) ReAnalysis-Interim (ERA-Interim), is widely used in many atmospheric applications. Therefore, it is of interest to assess the quality of this reanalysis product using available observations. Observations from Global Navigation Satellite Systems (GNSS) are, as of now, available for a period of over 2 decades and their global availability makes it possible to validate the IWV obtained from climate reanalysis models in different geographical and climatic regions. In this study, primarily, three 5-year long homogeneously reprocessed GNSS-derived IWV datasets containing over 400 globally distributed ground-based GNSS stations have been used to validate the IWV estimates obtained from the ERA-Interim climate reanalysis model in 25 different climate zones. The IWV from ERA-Interim has been obtained by vertically integrating the specific humidity at all model levels above the locations of GNSS stations. It has been studied how the difference between the ERA-Interim IWV and the GNSS-derived IWV varies with respect to the different climate zones as well as with respect to the difference in the model orography and latitude. The results show a dependence of the ability of ERA-Interim to model the IWV on difference in climate types and latitude. This dependence, however, is dictated by the concentration of water vapor in different climate zones and at different latitudes. Furthermore, as a secondary focus of this study, the weighted mean atmospheric temperature (Tm) obtained from ERA-Interim has been compared to its equivalent obtained using two widely used approximations globally.
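
    The vertical integration described above, IWV as the pressure integral of specific humidity divided by gravity, can be sketched as a simple trapezoidal sum over model levels. This is an illustrative sketch of the standard formula IWV = (1/g) ∫ q dp, not code from the study; variable names are invented here.

```python
G = 9.80665  # standard gravity, m/s^2

def integrated_water_vapor(pressures_pa, specific_humidity):
    """IWV (kg/m^2) from model-level pressures (Pa, ordered top to bottom,
    i.e. increasing) and specific humidity (kg/kg) at the same levels,
    using trapezoidal integration of q over pressure."""
    iwv = 0.0
    for k in range(len(pressures_pa) - 1):
        dp = pressures_pa[k + 1] - pressures_pa[k]           # layer thickness in Pa
        q_mean = 0.5 * (specific_humidity[k] + specific_humidity[k + 1])
        iwv += q_mean * dp
    return iwv / G

# Two-level toy column: dry at 500 hPa, q = 0.01 kg/kg at the surface.
iwv = integrated_water_vapor([50000.0, 100000.0], [0.0, 0.01])
```

    1 kg/m² of IWV corresponds to 1 mm of precipitable water, the unit in which GNSS-derived IWV is usually compared.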

  14. The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity

    ERIC Educational Resources Information Center

    Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.

    2004-01-01

    This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…

  15. Reliability and validity of Arabic Rapid Estimate of Adult Literacy in Dentistry (AREALD-30) in Saudi Arabia.

    PubMed

    Tadakamadla, Santosh Kumar; Quadri, Mir Faeq Ali; Pakpour, Amir H; Zailai, Abdulaziz M; Sayed, Mohammed E; Mashyakhy, Mohammed; Inamdar, Aadil S; Tadakamadla, Jyothi

    2014-09-29

    To evaluate the reliability and validity of Arabic Rapid Estimate of Adult Literacy in Dentistry (AREALD-30) in Saudi Arabia. A convenience sample of 200 subjects was approached, of which 177 agreed to participate giving a response rate of 88.5%. Rapid Estimate of Adult Literacy in Dentistry (REALD-99), was translated into Arabic to prepare the longer and shorter versions of Arabic Rapid Estimate of Adult Literacy in Dentistry (AREALD-99 and AREALD-30). Each participant was provided with AREALD-99 which also includes words from AREALD-30. A questionnaire containing socio-behavioral information and Arabic Oral Health Impact Profile (A-OHIP-14) was also administered. Reliability of the AREALD-30 was assessed by re-administering it to 20 subjects after two weeks. Convergent and predictive validity of AREALD-30 was evaluated by its correlations with AREALD-99 and self-perceived oral health status, dental visiting habits and A-OHIP-14 respectively. Discriminant validity was assessed in relation to the educational level while construct validity was evaluated by confirmatory factor analysis (CFA). Reliability of AREALD-30 was excellent with intraclass correlation coefficient of 0.99. It exhibited good convergent and discriminant validity but poor predictive validity. CFA showed presence of two factors and infit mean-square statistics for AREALD-30 were all within the desired range of 0.50 - 2.0 in Rasch analysis. AREALD-30 showed excellent reliability, good convergent and concurrent validity, but failed to predict the differences between the subjects categorized based on their oral health outcomes.

  16. Validation of a partial coherence interferometry method for estimating retinal shape

    PubMed Central

    Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.

    2015-01-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stage 1 and 2 involved a basic model eye without and with surface ray deviation, respectively and Stage 3 used model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling with <4% and <7% differences in retinal shapes along horizontal and vertical meridians, respectively. Stage 2 and Stage 3 gave slightly different retinal co-ordinates than Stage 1 and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data. PMID:26417496

  17. Validation of a partial coherence interferometry method for estimating retinal shape.

    PubMed

    Verkicharla, Pavan K; Suheimat, Marwan; Pope, James M; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L; Atchison, David A

    2015-09-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stage 1 and 2 involved a basic model eye without and with surface ray deviation, respectively and Stage 3 used model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling with <4% and <7% differences in retinal shapes along horizontal and vertical meridians, respectively. Stage 2 and Stage 3 gave slightly different retinal co-ordinates than Stage 1 and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data.

  18. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still has many uncertainties arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with high certainty is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and different time scales. An independent RS_ET validation using this method was performed over the Hai River Basin, China, for 2002-2009 as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes. Although some

  19. Validity of activity-based devices to estimate sleep.

    PubMed

    Weiss, Allison R; Johnson, Nathan L; Berger, Nathan A; Redline, Susan

    2010-08-15

    The aim of this study was to examine the feasibility of sleep estimation using a device designed and marketed to measure core physical activity. Thirty adolescent participants in an epidemiological research study wore 3 actigraphy devices on the wrist over a single night concurrent with polysomnography (PSG). Devices used include the Actical actigraph, designed and marketed for placement around the trunk to measure physical activity, in addition to 2 standard actigraphy devices used to assess sleep-wake states: the Sleepwatch actigraph and the Actiwatch actigraph. Sleep-wake behaviors, including total sleep time (TST) and sleep efficiency (SE), were estimated from each wrist device and PSG. Agreements between each device were calculated using Pearson product-moment correlations and Bland-Altman plots. Statistical analyses of TST revealed strong correlations between each wrist device and PSG (r = 0.822, 0.836, and 0.722 for Sleepwatch, Actiwatch, and Actical, respectively). TST measured using the Actical correlated strongly with Sleepwatch (r = 0.796), and even more strongly with Actiwatch (r = 0.955). In analyses of SE, Actical correlated strongly with Actiwatch (r = 0.820; p < 0.0001), but not with Sleepwatch (0.405; p = 0.0266). SE determined by PSG correlated somewhat strongly with SE estimated from the Sleepwatch and Actiwatch (r = 0.619 and 0.651, respectively), but only weakly with SE estimated from the Actical (r = 0.348; p = 0.0598). The results from this study suggest that a device designed for assessment of physical activity and truncal placement can be used to measure sleep duration as reliably as devices designed for wrist use and sleep-wake inference.
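
    The Pearson product-moment correlation behind the agreement statistics above is straightforward to compute from first principles. This is an illustrative sketch of the standard formula, not the study's analysis code; names are invented here.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples:
    r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear device readings give r = 1.0.
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

    A Bland-Altman analysis would complement r by plotting per-subject differences against means, since two devices can correlate strongly while still disagreeing by a constant bias.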

  20. Verification of Satellite Rainfall Estimates from the Tropical Rainfall Measuring Mission over Ground Validation Sites

    NASA Astrophysics Data System (ADS)

    Fisher, B. L.; Wolff, D. B.; Silberstein, D. S.; Marks, D. M.; Pippitt, J. L.

    2007-12-01

    The Tropical Rainfall Measuring Mission's (TRMM) Ground Validation (GV) Program was originally established with the principal long-term goal of determining the random errors and systematic biases stemming from the application of the TRMM rainfall algorithms. The GV Program has been structured around two validation strategies: 1) determining the quantitative accuracy of the integrated monthly rainfall products at GV regional sites over large areas of about 500 km2 using integrated ground measurements and 2) evaluating the instantaneous satellite and GV rain rate statistics at spatio-temporal scales compatible with the satellite sensor resolution (Simpson et al. 1988, Thiele 1988). The GV Program has continued to evolve since the launch of the TRMM satellite on November 27, 1997. This presentation will discuss current GV methods of validating TRMM operational rain products in conjunction with ongoing research. The challenge facing TRMM GV has been how to best utilize rain information from the GV system to infer the random and systematic error characteristics of the satellite rain estimates. A fundamental problem of validating space-borne rain estimates is that the true mean areal rainfall is an ideal, scale-dependent parameter that cannot be directly measured. Empirical validation uses ground-based rain estimates to determine the error characteristics of the satellite-inferred rain estimates, but ground estimates also incur measurement errors and contribute to the error covariance. Furthermore, sampling errors, associated with the discrete, discontinuous temporal sampling by the rain sensors aboard the TRMM satellite, become statistically entangled in the monthly estimates. Sampling errors complicate the task of linking biases in the rain retrievals to the physics of the satellite algorithms. The TRMM Satellite Validation Office (TSVO) has made key progress towards effective satellite validation. For disentangling the sampling and retrieval errors, TSVO has developed

  1. Validation of Statistical Models for Estimating Hospitalization Associated with Influenza and Other Respiratory Viruses

    PubMed Central

    Chan, King-Pan; Chan, Kwok-Hung; Wong, Wilfred Hing-Sang; Peiris, J. S. Malik; Wong, Chit-Ming

    2011-01-01

    Background Reliable estimates of disease burden associated with respiratory viruses are key to the deployment of preventive strategies such as vaccination and resource allocation. Such estimates are particularly needed in tropical and subtropical regions where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches to assess the influenza-associated disease burden have been recently reported, none of these models have been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza and adenovirus. Methods and Findings We had recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing in Hong Kong Island. Here we used this dataset to validate two commonly used models for estimation of influenza disease burden, namely the rate difference model and the Poisson regression model, and also explored the applicability of these models to estimate the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with the virologically confirmed influenza-associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza and adenovirus were less reliable, with wide confidence intervals. The rate difference model was not applicable to RSV, parainfluenza and adenovirus and grossly underestimated the true burden of influenza-associated hospitalization. Conclusion The Poisson regression model generally produced satisfactory estimates of the disease burden of respiratory viruses in a subtropical region such as Hong Kong. PMID:21412433
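The rate difference model validated above can be illustrated with a minimal sketch: the baseline hospitalization rate from weeks without viral circulation is subtracted from counts observed in virus-active weeks. This is a generic illustration, not the authors' code, and the weekly counts below are hypothetical.

```python
# Sketch of the rate-difference model for excess hospitalization burden.

def rate_difference_burden(active, inactive):
    """Excess hospitalizations attributed to the virus.

    active / inactive: weekly hospitalization counts during weeks when
    the virus was / was not circulating.
    """
    baseline_rate = sum(inactive) / len(inactive)      # expected weekly count
    excess = sum(c - baseline_rate for c in active)    # excess over baseline
    return max(excess, 0.0)                            # burden cannot be negative

# Hypothetical counts: 10 virus-active weeks vs. 20 baseline weeks.
active_weeks = [30, 35, 40, 38, 33, 31, 29, 36, 42, 37]
baseline_weeks = [20] * 20
burden = rate_difference_burden(active_weeks, baseline_weeks)
```

As the abstract notes, this simple subtraction can grossly underestimate the true burden when the "inactive" baseline period is contaminated by residual viral activity.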

  2. Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly

    ERIC Educational Resources Information Center

    Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A.

    2016-01-01

    Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…

  3. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR.

  4. Validity of a Self-Report Recall Tool for Estimating Sedentary Behavior in Adults.

    PubMed

    Gomersall, Sjaan R; Pavey, Toby G; Clark, Bronwyn K; Jasman, Adib; Brown, Wendy J

    2015-11-01

    Sedentary behavior is continuing to emerge as an important target for health promotion. The purpose of this study was to determine the validity of a self-report use of time recall tool, the Multimedia Activity Recall for Children and Adults (MARCA), in estimating time spent sitting/lying, compared with a device-based measure. Fifty-eight participants (48% female, [mean ± standard deviation] 28 ± 7.4 years of age, 23.9 ± 3.05 kg/m²) wore an activPAL device for 24 h and the following day completed the MARCA. Pearson correlation coefficients (r) were used to analyze convergent validity of the adult MARCA compared with activPAL estimates of total sitting/lying time. Agreement was examined using Bland-Altman plots. According to activPAL estimates, participants spent 10.4 hr/day [standard deviation (SD) = 2.06] sitting or lying down while awake. The correlation between MARCA and activPAL estimates of total sit/lie time was r = .77 (95% confidence interval = 0.64-0.86; P < .001). Bland-Altman analyses revealed a mean bias of +0.59 hr/day with moderately wide limits of agreement (-2.35 hr to +3.53 hr/day). This study found a moderate to strong agreement between the adult MARCA and the activPAL, suggesting that the MARCA is an appropriate tool for the measurement of time spent sitting or lying down in an adult population.
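The Bland-Altman agreement analysis used in this study can be sketched as follows: the mean bias is the average difference between the two methods, and the 95% limits of agreement are bias ± 1.96 SD of the differences. The sitting-time values below are hypothetical, not the study's data.

```python
import math

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measures."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical hours/day of sitting: self-report vs. device.
marca    = [10.5, 9.0, 12.0, 8.5, 11.0]
activpal = [10.0, 9.5, 11.0, 8.0, 10.2]
bias, lo, hi = bland_altman(marca, activpal)
```

A positive bias, as in the study (+0.59 hr/day), means the self-report tool over-reports relative to the device on average.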

  5. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.

  6. Reliability and Validity of the Evidence-Based Practice Confidence (EPIC) Scale

    ERIC Educational Resources Information Center

    Salbach, Nancy M.; Jaglal, Susan B.; Williams, Jack I.

    2013-01-01

    Introduction: The reliability, minimal detectable change (MDC), and construct validity of the evidence-based practice confidence (EPIC) scale were evaluated among physical therapists (PTs) in clinical practice. Methods: A longitudinal mail survey was conducted. Internal consistency and test-retest reliability were estimated using Cronbach's alpha…

  7. Estimation and Validation of Oceanic Mass Circulation from the GRACE Mission

    NASA Technical Reports Server (NTRS)

    Boy, J.-P.; Rowlands, D. D.; Sabaka, T. J.; Luthcke, S. B.; Lemoine, F. G.

    2011-01-01

    Since the launch of the Gravity Recovery And Climate Experiment (GRACE) in March 2002, the Earth's surface mass variations have been monitored with unprecedented accuracy and resolution. Compared to the classical spherical harmonic solutions, global high-resolution mascon solutions allow the retrieval of mass variations with higher spatial and temporal sampling (2 degrees and 10 days). We present here the validation of the GRACE global mascon solutions by comparing mass estimates to a set of about 100 ocean bottom pressure (OBP) records, and show that the forward modelling of continental hydrology prior to the inversion of the K-band range rate data allows better estimates of ocean mass variations. We also validate our GRACE results against OBP variations modelled by different state-of-the-art ocean general circulation models, including ECCO (Estimating the Circulation and Climate of the Ocean) and operational and reanalysis products from the MERCATOR project.

  8. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  9. PHEV Energy Use Estimation: Validating the Gamma Distribution for Representing the Random Daily Driving Distance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhenhong; Dong, Jing; Liu, Changzheng

    2012-01-01

    The petroleum and electricity consumptions of plug-in hybrid electric vehicles (PHEVs) are sensitive to the variation of daily vehicle miles traveled (DVMT). Some studies assume DVMT to follow a Gamma distribution, but such a Gamma assumption is yet to be validated. This study finds the Gamma assumption valid in the context of PHEV energy analysis, based on continuous GPS travel data of 382 vehicles, each tracked for at least 183 days. The validity conclusion is based on the small prediction errors in PHEV petroleum use, electricity use, and energy cost that result from the Gamma assumption. The finding that the Gamma distribution is valid and reliable is important. It paves the way for the Gamma distribution to be assumed for analyzing energy uses of PHEVs in the real world. The Gamma distribution can be easily specified with very few pieces of driver information and is relatively easy for mathematical manipulation. Given the validation in this study, the Gamma distribution can now be used with better confidence in a variety of applications, such as improving vehicle consumer choice models, quantifying range anxiety for battery electric vehicles, investigating roles of charging infrastructure, and constructing online calculators that provide personal estimates of PHEV energy use.
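A minimal sketch of the Gamma assumption for daily driving distance: fit shape and scale by the method of moments, then estimate gasoline-powered miles as E[max(0, d − R)] for an assumed charge-depleting range R. This is a generic illustration, not the study's code; the mileage values and the 35-mile range are hypothetical.

```python
import random

def fit_gamma_moments(miles):
    """Method-of-moments Gamma fit: shape k = mean^2/var, scale theta = var/mean."""
    n = len(miles)
    mean = sum(miles) / n
    var = sum((m - mean) ** 2 for m in miles) / (n - 1)
    return mean ** 2 / var, var / mean

def expected_gasoline_miles(k, theta, electric_range, draws=100_000, seed=1):
    """Monte-Carlo E[max(0, d - R)]: miles driven beyond the
    charge-depleting range, i.e. miles powered by gasoline."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        d = rng.gammavariate(k, theta)           # one day's driving distance
        total += max(0.0, d - electric_range)
    return total / draws

# Hypothetical DVMT observations (miles/day).
k, theta = fit_gamma_moments([10.0, 20.0, 30.0, 40.0])
gas_miles = expected_gasoline_miles(k, theta, electric_range=35.0)
```

This is the sense in which a convenient two-parameter fit "can be easily specified with very few pieces of driver information" and still drive downstream energy-cost calculations.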

  10. Validity of a digital diet estimation method for use with preschool children

    USDA-ARS?s Scientific Manuscript database

    The validity of using the Remote Food Photography Method (RFPM) for measuring minority preschool children's food intake is not well documented. The aim of the study was to determine the validity of intake estimations made by human raters using the RFPM compared with those obtained by weigh...

  11. Large-scale precipitation estimation using Kalpana-1 IR measurements and its validation using GPCP and GPCC data

    NASA Astrophysics Data System (ADS)

    Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.

    2011-12-01

    Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images (0000, 0300, 0600, …, 2100 hours UTC) for rainfall estimation, in preparation for INSAT-3D. After the temperatures of all the pixels in a grid are known, they are distributed to generate a three-hourly 24-class histogram of brightness temperatures of IR (10.5-12.5 μm) images for a 1.0° × 1.0° latitude/longitude box. Daily, monthly, and seasonal rainfall were then estimated from these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the validation of monthly and seasonal rainfall estimates has been carried out using the Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for large-scale precipitation estimation, qualitatively as well as quantitatively. The results also suggest that the simple IR-based estimation technique can be used to estimate rainfall for tropical areas at a larger temporal scale for climatological applications.
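The GOES precipitation index (GPI) logic described above assigns rain in proportion to the fraction of IR pixels colder than a threshold (235 K), at a fixed conditional rate of 3 mm/h. A minimal sketch, with hypothetical brightness temperatures:

```python
# GPI for one grid box: rain = (cold-pixel fraction) * 3 mm/h * hours.

COLD_THRESHOLD_K = 235.0
GPI_RATE_MM_PER_H = 3.0

def gpi_rain_mm(brightness_temps, hours):
    """Accumulated rain (mm) for one grid box over `hours`."""
    cold = sum(1 for t in brightness_temps if t < COLD_THRESHOLD_K)
    fraction = cold / len(brightness_temps)
    return fraction * GPI_RATE_MM_PER_H * hours

# Hypothetical 1° box: 40% of pixels cold over a 3-h window.
pixels = [230.0] * 40 + [260.0] * 60
rain = gpi_rain_mm(pixels, hours=3.0)   # 0.4 * 3 mm/h * 3 h = 3.6 mm
```

Summing such three-hourly box estimates over a day, month, or season gives the coarser-scale totals that the abstract validates against GPCP and GPCC.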

  12. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles when appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Methods for obtaining consistency in code counting are presented, as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data are obtained, a mapping from the SEER-SEM output into the JPL Work Breakdown Structure (WBS) is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions, with the SLOC, data description, brief mission description, and the most relevant SEER-SEM parameter values, is given to encapsulate the data used and calculated in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS) and is addressed at the end of this paper.

  13. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  14. SPECTRAL data-based estimation of soil heat flux

    USGS Publications Warehouse

    Singh, Ramesh K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.

    2011-01-01

    Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from the normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and to present the suitability of the developed model for estimating G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best for estimating G based on r2 values. The effects of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation for nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg ha-1. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environment conditions.
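Exponential G models of the kind described above are commonly written as G = a·Rn·exp(b·NDVI), with G decreasing as canopy cover (NDVI) increases. The abstract does not give the calibrated coefficients, so a and b below are placeholder values for illustration only.

```python
import math

def soil_heat_flux(rn, ndvi, a=0.3, b=-2.1):
    """Sketch of an exponential G model: G = a * Rn * exp(b * NDVI).

    rn: net radiation (W/m^2); ndvi: unitless vegetation index.
    a, b are hypothetical placeholders, not the article's coefficients.
    """
    return rn * a * math.exp(b * ndvi)

# Denser canopy (higher NDVI) shades the soil, so G should drop.
g_bare   = soil_heat_flux(500.0, 0.1)   # sparse cover
g_canopy = soil_heat_flux(500.0, 0.8)   # full canopy
```

The four case studies in the article share this structure and differ only in the fitted (a, b) pairs for each crop/soil/management combination.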

  15. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    …blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research…
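The grid-based MMDM area adjustment discussed above converts an abundance estimate N̂ into a density by enlarging the trapping grid with a boundary strip of width W = MMDM/2 before dividing. A minimal sketch for a square grid; the grid size and MMDM value are hypothetical:

```python
def grid_density_per_ha(n_hat, grid_side_m, mmdm_m):
    """Density estimate with a boundary-strip correction.

    Effective trapping area = (L + 2W)^2 with strip width W = MMDM / 2,
    where MMDM is the (full) mean maximum distance moved.
    """
    w = mmdm_m / 2.0
    area_m2 = (grid_side_m + 2.0 * w) ** 2
    return n_hat / (area_m2 / 10_000.0)   # animals per hectare

# Hypothetical: 50 animals estimated on a 90 m grid, MMDM = 20 m.
density = grid_density_per_ha(50, 90.0, 20.0)
```

The strip width is exactly the "theoretical assumption of uncertain validity" the authors flag: the true effective area sampled by a grid is unknown, which is why distance-sampling (web-based) estimators fared better.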

  16. Model-Based Estimation of Knee Stiffness

    PubMed Central

    Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.

    2013-01-01

    During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482

  17. Validation of proton stopping power ratio estimation based on dual energy CT using fresh tissue samples

    NASA Astrophysics Data System (ADS)

    Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.

    2018-01-01

    Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volumes of interest were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, having a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation, although good results were also obtained using SECT for most tissues.

  18. Deflection-Based Aircraft Structural Loads Estimation with Comparison to Flight

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew M.; Lokos, William A.

    2005-01-01

    Traditional techniques in structural load measurement entail the correlation of a known load with strain-gage output from the individual components of a structure or machine. The use of strain gages has proved successful and is considered the standard approach for load measurement. However, remotely measuring aerodynamic loads using deflection measurement systems to determine aeroelastic deformation as a substitute for strain gages may yield lower testing costs while improving aircraft performance through reduced instrumentation weight. This technique was examined using a reliable strain and structural deformation measurement system. The objective of this study was to explore the utility of deflection-based load estimation, using the active aeroelastic wing F/A-18 aircraft. Calibration data from ground tests performed on the aircraft were used to derive left wing-root and wing-fold bending-moment and torque load equations based on strain gages; for this study, however, point deflections were used to derive deflection-based load equations. Comparisons between the strain-gage and deflection-based methods are presented. Flight data from the phase-1 active aeroelastic wing flight program were used to validate the deflection-based load estimation method. Flight validation revealed a strong bending-moment correlation and a slightly weaker torque correlation. Development of current techniques and future studies are discussed.

  19. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10 day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K Band Range Rate (KBRR) residuals (1 μm s-1 level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate index and in-situ observations). Regional TWS show higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions as a linear combination of GFZ, CSR and JPL GRACE solutions).

  20. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.

  1. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
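The GLRT principle behind the detection statistic can be illustrated in its simplest textbook form: detecting a known signal template with unknown amplitude in white Gaussian noise, where the statistic reduces to a normalized matched-filter energy. This is a generic GLRT sketch, not the paper's GNSS-specific statistic, and the template and noise values are synthetic.

```python
import math
import random

def glrt_statistic(x, s, sigma2):
    """GLRT for x = a*s + n with unknown amplitude a, noise variance sigma2:
    T = (s.x)^2 / (sigma2 * s.s). Large T favors 'signal present'."""
    sx = sum(si * xi for si, xi in zip(s, x))   # matched-filter output
    ss = sum(si * si for si in s)               # template energy
    return (sx * sx) / (sigma2 * ss)

rng = random.Random(0)
template = [math.sin(0.3 * n) for n in range(200)]
noise_only  = [rng.gauss(0.0, 1.0) for _ in range(200)]
with_signal = [0.8 * t + rng.gauss(0.0, 1.0) for t in template]

t0 = glrt_statistic(noise_only, template, 1.0)   # ~ chi-square(1): small
t1 = glrt_statistic(with_signal, template, 1.0)  # large when signal present
```

Under the null hypothesis T follows a chi-square distribution with one degree of freedom, which is what lets a detection threshold be set for a chosen false-alarm rate.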

  2. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  3. Validity of parent-reported weight and height of preschool children measured at home or estimated without home measurement: a validation study

    PubMed Central

    2011-01-01

    Background Parental reports are often used in large-scale surveys to assess children's body mass index (BMI). It is therefore important to know to what extent these parental reports are valid and whether it makes a difference if the parents measured their children's weight and height at home or simply estimated these values. The aim of this study is to compare the validity of parent-reported height, weight and BMI values of preschool children (3-7 y old), when measured at home or estimated by parents without actual measurement. Methods The subjects were 297 Belgian preschool children (52.9% male). Participation rate was 73%. A questionnaire including questions about the height and weight of the children was completed by the parents. Nurses measured height and weight following standardised procedures. International age- and sex-specific BMI cut-off values were employed to determine categories of weight status and obesity. Results On the group level, no important differences in accuracy of reported height, weight and BMI were identified between parent-measured and estimated values. However, for all 3 parameters, the correlations between parental reports and nurse measurements were higher in the group of children whose body dimensions were measured by the parents. Sensitivity for underweight and overweight/obesity was 73% and 47%, respectively, when parents measured their child's height and weight, and 55% and 47% when parents estimated values without measurement. Specificity for underweight and overweight/obesity was 82% and 97%, respectively, when parents measured the children, and 75% and 93% with parent estimations. Conclusions Diagnostic measures were more accurate when parents measured their child's weight and height at home than when those dimensions were based on parental judgements. When parent-reported data on an individual level are used, the accuracy could be improved by encouraging the parents to measure the weight and height of their children at home.

  4. Exploring the validity of HPQ-based presenteeism measures to estimate productivity losses in the health and education sectors.

    PubMed

    Scuffham, Paul A; Vecchio, Nerina; Whiteford, Harvey A

    2014-01-01

    Illness-related presenteeism (suboptimal work performance) may be a significant factor in worker productivity. Until now, there has been no generally accepted best method of measuring presenteeism across different industries and occupations. This study sought to validate the Health and Work Performance Questionnaire (HPQ)-based measure of presenteeism across occupations and industries and to assess the most appropriate method for data analysis. Work performance was measured using the modified version of the HPQ conducted in workforce samples from the education and health workforce in Queensland, Australia (N = 30,870) during 2005 and 2006. Three approaches to the analysis of presenteeism measures were assessed, using absolute performance, the ratio of own performance to others' performance, and the difference between others' and own performance. The best measure is judged by its sensitivity to changes in health indicators. The measure that best correlated with health indicators was absolute presenteeism. For example, in the health sector, correlations between physical health status and absolute presenteeism were 4 to 5 times greater than with the ratio or difference approaches, and in the education sector, these correlations were twice as large. Using this approach, the estimated cost of presenteeism in 2006 was $Aus8338 and $Aus8092 per worker per annum for the health and education sectors, respectively. The HPQ is a valid measure of presenteeism. Transforming responses by the perceived performance of peers is unnecessary, as absolute presenteeism correlated best with health indicators. Absolute presenteeism was also more insightful for ascertaining the cost of presenteeism.

  5. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, the observed cases, confirmed cases only, and known confirmation rate approaches may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
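
    The MI approach this record favors can be sketched numerically: draw a confirmation probability from a posterior based on the validation sample, thin the observed event counts, and average the resulting estimates. The counts, window lengths, and Beta-posterior draw below are illustrative assumptions, not the paper's simulation design:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Internal validation sample: 40 of 50 reviewed events confirmed (hypothetical).
    n_reviewed, n_confirmed = 50, 40

    # Unadjudicated event counts in the risk vs. control windows of an SCCS design,
    # with the window lengths as person-time (all values hypothetical).
    events_risk, events_ctrl = 30, 120
    pt_risk, pt_ctrl = 1.0, 6.0

    M = 200  # number of imputations
    log_irr = np.empty(M)
    for m in range(M):
        # Propagate uncertainty in the confirmation rate via a Beta posterior draw.
        p = rng.beta(n_confirmed + 1, n_reviewed - n_confirmed + 1)
        # Impute "true" case counts by binomial thinning of the observed counts.
        a = rng.binomial(events_risk, p)
        b = rng.binomial(events_ctrl, p)
        log_irr[m] = np.log((a + 0.5) / pt_risk) - np.log((b + 0.5) / pt_ctrl)

    # Rubin-style point estimate: average the imputed log incidence-rate ratios.
    irr = np.exp(log_irr.mean())
    print(f"MI-adjusted IRR ~ {irr:.2f}")
    ```

    Because the same confirmation probability thins both windows, the adjusted rate ratio stays near the unadjusted one here; the MI machinery matters for the variance, which single-draw approaches understate.
    
    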

  6. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations

    PubMed Central

    2013-01-01

    Background Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Methods Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons’ camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to “gold standard” reference population figures from census or other robust methods. Results Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of <10% in four sites and 10–30% in three sites, but severely over-estimated the population in an Ethiopian camp with implausible occupancy data and two post-earthquake Haiti sites featuring dense and complex residential layout. For each site, estimates were produced in 2–5 working person-days. Conclusions In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. However, it may have insurmountable

  7. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    PubMed

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of <10% in four sites and 10-30% in three sites, but severely over-estimated the population in an Ethiopian camp with implausible occupancy data and two post-earthquake Haiti sites featuring dense and complex residential layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. 
However, it may have insurmountable limitations in settings featuring connected buildings or
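
    The estimator described in this record reduces to structure count multiplied by reported occupancy, with the spread of occupancy reports giving a crude range. A minimal sketch, with all numbers hypothetical:

    ```python
    # Hypothetical inputs: manual structure counts from two independent analysts
    # and people-per-structure figures taken from occupancy reports.
    counts = [1480, 1525]                  # analyst structure counts
    occupancy_reports = [4.6, 5.1, 4.9]    # persons per structure

    structures = sum(counts) / len(counts)
    occ_mid = sum(occupancy_reports) / len(occupancy_reports)
    occ_lo, occ_hi = min(occupancy_reports), max(occupancy_reports)

    estimate = structures * occ_mid
    low, high = structures * occ_lo, structures * occ_hi
    print(f"population ~ {estimate:.0f} (range {low:.0f}-{high:.0f})")
    ```

    The range here reflects only occupancy uncertainty; in practice analyst count disagreement and the household-vs-structure distinction the record flags would widen it.
    
    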

  8. Validation techniques for fault emulation of SRAM-based FPGAs

    DOE PAGES

    Quinn, Heather; Wirthlin, Michael

    2015-08-07

    A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness of mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If a fault emulation system does not mimic the radiation environment, it will generate erroneous data and incorrect predictions of the behavior of the FPGA in a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.

  9. Population-based absolute risk estimation with survey data

    PubMed Central

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614

  10. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using

  11. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Musgrave, J.

    1992-01-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using

  12. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time.

    PubMed

    Martin, Corby K; Correa, John B; Han, Hongmei; Allen, H Raymond; Rood, Jennifer C; Champagne, Catherine M; Gunturk, Bahadir K; Bray, George A

    2012-04-01

    Two studies are reported: a pilot study to demonstrate feasibility followed by a larger validity study. Study 1's objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake (EI) with the Remote Food Photography Method (RFPM) over 6 days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, EI estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n = 24) or Customized Prompts (n = 16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating EI when Standard (mean ± s.d. = -895 ± 770 kcal/day, P < 0.0001), but not Customized Prompts (-270 ± 748 kcal/day, P = 0.22) were used. Error (EI from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM's ability to accurately estimate EI in free-living adults (N = 50) over 6 days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living EI (-152 ± 694 kcal/day, P = 0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake.
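
    The validity analysis in this record is a paired comparison of method error (image-based estimate minus the DLW reference). A hedged sketch with simulated data, not the study's, using an assumed bias and noise level:

    ```python
    import math

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical paired energy-intake estimates (kcal/day) for 20 participants:
    # DLW as the reference, and an image-based estimate with bias and noise.
    dlw = rng.normal(2400, 300, 20)
    rfpm = dlw + rng.normal(-150, 250, 20)  # assumed method error

    err = rfpm - dlw
    bias = err.mean()
    se = err.std(ddof=1) / math.sqrt(len(err))
    t = bias / se  # one-sample t statistic for mean error = 0
    print(f"bias = {bias:.0f} kcal/day, t = {t:.2f}")
    ```

    A mean error near zero with a small t statistic corresponds to the paper's "did not differ significantly from DLW" conclusion for Study 2.
    
    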

  13. Natural Forest Biomass Estimation Based on Plantation Information Using PALSAR Data

    PubMed Central

    Avtar, Ram; Suzuki, Rikie; Sawada, Haruo

    2014-01-01

    Forests play a vital role in terrestrial carbon cycling; therefore, monitoring forest biomass at local to global scales has become a challenging issue in the context of climate change. In this study, we investigated the backscattering properties of Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) data in cashew and rubber plantation areas of Cambodia. The PALSAR backscattering coefficient (σ0) had different responses in the two plantation types because of differences in biophysical parameters. The PALSAR σ0 showed a higher correlation with field-based measurements and lower saturation in cashew plants compared with rubber plants. Multiple linear regression (MLR) models based on field-based biomass of cashew (C-MLR) and rubber (R-MLR) plants with PALSAR σ0 were created. These MLR models were used to estimate natural forest biomass in Cambodia. The cashew plant-based MLR model (C-MLR) produced better results than the rubber plant-based MLR model (R-MLR). The C-MLR-estimated natural forest biomass was validated using forest inventory data for natural forests in Cambodia. The validation results showed a strong correlation (R2 = 0.64) between C-MLR-estimated natural forest biomass and field-based biomass, with RMSE  = 23.2 Mg/ha in deciduous forests. In high-biomass regions, such as dense evergreen forests, this model had a weaker correlation because of the high biomass and the multiple-story tree structure of evergreen forests, which caused saturation of the PALSAR signal. PMID:24465908
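
    The MLR step described in this record (biomass regressed on PALSAR backscatter) can be sketched with synthetic data; the polarisations, coefficients, and noise levels below are assumptions for illustration, not the paper's fitted model:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical training plots: field-measured biomass (Mg/ha) and simulated
    # backscatter (dB) at two polarisations, each loosely tied to biomass.
    biomass = rng.uniform(20, 150, 40)
    hh = -8.0 + 0.02 * biomass + rng.normal(0, 0.4, 40)
    hv = -14.0 + 0.04 * biomass + rng.normal(0, 0.4, 40)

    # Multiple linear regression: biomass ~ b0 + b1*HH + b2*HV (least squares).
    X = np.column_stack([np.ones_like(hh), hh, hv])
    coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

    pred = X @ coef
    r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
    print(f"R^2 = {r2:.2f}")
    ```

    The saturation the record reports in dense evergreen forest corresponds to the backscatter-biomass slope flattening at high biomass, which a linear model like this cannot capture.
    
    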

  14. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the precision of LFMCW radars.
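
    The linear prediction property mentioned in this record, s[k] = 2 cos(w) s[k-1] - s[k-2] for a sinusoid, yields a one-line least-squares frequency estimator. This is a generic sketch of that property, not the paper's full phase-match method; the sampling rate and frequency are arbitrary:

    ```python
    import numpy as np

    fs = 1000.0     # sampling rate, Hz (illustrative)
    f_true = 123.4  # hypothetical sinusoid frequency, Hz
    n = np.arange(512)
    s = np.cos(2 * np.pi * f_true / fs * n + 0.7)

    # Least-squares estimate of cos(w) from the sinusoid recurrence
    # s[k] + s[k-2] = 2*cos(w)*s[k-1].
    num = np.sum(s[1:-1] * (s[2:] + s[:-2]))
    den = 2 * np.sum(s[1:-1] ** 2)
    w = np.arccos(num / den)

    f_est = w * fs / (2 * np.pi)
    print(f"estimated frequency: {f_est:.2f} Hz")
    ```

    On a noiseless sinusoid the recurrence holds exactly, so the estimate matches to floating-point precision; with noise, the correlation-based refinements the paper adds become necessary.
    
    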

  15. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.

    PubMed

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical impedance analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations, a specific equation for FFM estimation, and dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The published predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05), large limits of agreement by Bland-Altman, and explained 68% to 88% of FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. The published BIA predictive equations showed poor accuracy in this sample. The specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
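
    The Bland-Altman assessment named in this record reduces to a mean difference (bias) and 95% limits of agreement. A minimal sketch with simulated data (not the study's), assuming a BIA equation with a fixed bias and random error against DXA:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical paired fat-free mass estimates (kg): DXA reference vs. a
    # BIA equation with assumed bias (+1.5 kg) and random error (s.d. 2.0 kg).
    dxa = rng.normal(62, 6, 100)
    bia = dxa + rng.normal(1.5, 2.0, 100)

    diff = bia - dxa
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    print(f"bias = {bias:.2f} kg, limits of agreement = "
          f"({loa[0]:.2f}, {loa[1]:.2f}) kg")
    ```

    "Large limits of agreement" in the abstract means this interval is wide relative to clinically meaningful FFM differences, even if the mean bias is small.
    
    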

  16. Validation of a spectrophotometer-based method for estimating daily sperm production and deferent duct transit.

    PubMed

    Froman, D P; Rhoads, D D

    2012-10-01

    The objectives of the present work were 3-fold. First, a new method for estimating daily sperm production was validated. This method, in turn, was used to evaluate testis output as well as deferent duct throughput. Next, this analytical approach was evaluated in 2 experiments. The first experiment compared left and right reproductive tracts within roosters. The second experiment compared reproductive tract throughput in roosters from low and high sperm mobility lines. Standard curves were constructed from which unknown concentrations of sperm cells and sperm nuclei could be predicted from observed absorbance. In each case, the independent variable was based upon hemacytometer counts, and absorbance was a linear function of concentration. Reproductive tracts were excised, semen recovered from each duct, and the extragonadal sperm reserve determined by multiplying volume by sperm cell concentration. Testicular sperm nuclei were procured by homogenization of a whole testis, overlaying a 20-mL volume of homogenate upon 15% (wt/vol) Accudenz (Accurate Chemical and Scientific Corporation, Westbury, NY), and then washing nuclei by centrifugation through the Accudenz layer. Daily sperm production was determined by dividing the predicted number of sperm nuclei within the homogenate by 4.5 d (i.e., the time sperm with elongated nuclei spend within the testis). Sperm transit through the deferent duct was estimated by dividing the extragonadal reserve by daily sperm production. Neither the efficiency of sperm production (sperm per gram of testicular parenchyma per day) nor deferent duct transit differed between left and right reproductive tracts (P > 0.05). Whereas efficiency of sperm production did not differ (P > 0.05) between low and high sperm mobility lines, deferent duct transit differed between lines (P < 0.001). On average, this process required 2.2 and 1.0 d for low and high lines, respectively. In summary, we developed and then tested a method for quantifying male
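
    The two quotients at the heart of this method are simple to sketch: daily sperm production is the predicted nuclei count divided by the 4.5 d testicular residence time stated in the abstract, and duct transit is the extragonadal reserve divided by daily production. The counts below are hypothetical:

    ```python
    # Hypothetical inputs following the record's arithmetic.
    sperm_nuclei = 9.0e9          # predicted sperm nuclei in the testis homogenate
    extragonadal_reserve = 2.0e9  # sperm recovered from the deferent duct (volume x conc.)
    residence_days = 4.5          # time elongated-nucleus sperm spend in the testis

    dsp = sperm_nuclei / residence_days        # daily sperm production
    transit_days = extragonadal_reserve / dsp  # deferent duct transit time
    print(f"daily sperm production = {dsp:.2e}/d, transit = {transit_days:.1f} d")
    ```

    With these illustrative numbers the transit time comes out at 1.0 d, the value the abstract reports for the high-mobility line.
    
    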

  17. Comparison of anthropometric-based equations for estimation of body fat percentage in a normal-weight and overweight female cohort: validation via air-displacement plethysmography.

    PubMed

    Temple, Derry; Denis, Romain; Walsh, Marianne C; Dicker, Patrick; Byrne, Annette T

    2015-02-01

    To evaluate the accuracy of the most commonly used anthropometric-based equations in the estimation of percentage body fat (%BF) in both normal-weight and overweight women using air-displacement plethysmography (ADP) as the criterion measure. A comparative study in which the equations of Durnin and Womersley (1974; DW) and Jackson, Pollock and Ward (1980) at three, four and seven sites (JPW₃, JPW₄ and JPW₇) were validated against ADP in three groups. Group 1 included all participants, group 2 included participants with a BMI <25·0 kg/m² and group 3 included participants with a BMI ≥25·0 kg/m². Human Performance Laboratory, Institute for Sport and Health, University College Dublin, Republic of Ireland. Forty-three female participants aged between 18 and 55 years. In all three groups, the %BF values estimated from the DW equation were closer to the criterion measure (i.e. ADP) than those estimated from the other equations. Of the three JPW equations, JPW₃ provided the most accurate estimation of %BF when compared with ADP in all three groups. In comparison to ADP, these findings suggest that the DW equation is the most accurate anthropometric method for the estimation of %BF in both normal-weight and overweight females.
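
    A skinfold-to-%BF pipeline of the Durnin-Womersley type can be sketched as follows: body density is regressed on log10 of the sum of four skinfolds, then the Siri equation converts density to %BF. The regression coefficients below are placeholders, not the published age- and sex-specific DW values; the Siri conversion is standard:

    ```python
    import math

    # DW-style density model: D = c - m * log10(sum of four skinfolds)
    # (biceps, triceps, subscapular, suprailiac). c and m are HYPOTHETICAL.
    c, m = 1.1599, 0.0717
    skinfolds_mm = [12.0, 16.5, 14.0, 18.5]

    D = c - m * math.log10(sum(skinfolds_mm))   # body density, g/cm^3
    body_fat_pct = 495.0 / D - 450.0            # Siri equation
    print(f"density = {D:.4f} g/cm^3, body fat = {body_fat_pct:.1f}%")
    ```

    Comparing such equation-based %BF values against a criterion like air-displacement plethysmography is exactly the validation exercise this record performs.
    
    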

  18. An intercomparison and validation of satellite-based surface radiative energy flux estimates over the Arctic

    NASA Astrophysics Data System (ADS)

    Riihelä, Aku; Key, Jeffrey R.; Meirink, Jan Fokke; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-05-01

    Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing data sets must be ascertained to facilitate their use. Here we compare radiative flux estimates from Clouds and the Earth's Radiant Energy System (CERES) Synoptic 1-degree (SYN1deg)/Energy Balanced and Filled, Global Energy and Water Cycle Experiment (GEWEX) surface energy budget, and our own experimental FluxNet / Satellite Application Facility on Climate Monitoring cLoud, Albedo and RAdiation (CLARA) data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The Advanced Very High Resolution Radiometer-based GEWEX and FluxNet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the FluxNet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and that further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  19. Two-Tiered Violence Risk Estimates: a validation study of an integrated-actuarial risk assessment instrument.

    PubMed

    Mills, Jeremy F; Gray, Andrew L

    2013-11-01

    This study is an initial validation study of the Two-Tiered Violence Risk Estimates instrument (TTV), a violence risk appraisal instrument designed to support an integrated-actuarial approach to violence risk assessment. The TTV was scored retrospectively from file information on a sample of violent offenders. Construct validity was examined by comparing the TTV with instruments that have shown utility to predict violence that were prospectively scored: The Historical-Clinical-Risk Management-20 (HCR-20) and Lifestyle Criminality Screening Form (LCSF). Predictive validity was examined through a long-term follow-up of 12.4 years with a sample of 78 incarcerated offenders. Results show the TTV to be highly correlated with the HCR-20 and LCSF. The base rate for violence over the follow-up period was 47.4%, and the TTV was equally predictive of violent recidivism relative to the HCR-20 and LCSF. Discussion centers on the advantages of an integrated-actuarial approach to the assessment of violence risk.

  20. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J.

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
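
    The α–β filter component of the ABST method is a classic two-gain state observer: predict position from the current velocity estimate, then correct both states with the measurement residual. A generic sketch with illustrative gains and simulated measurements, not the paper's data or parameters:

    ```python
    import random

    def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
        """Alpha-beta filter over a 1-D position track (gains are illustrative)."""
        x, v = measurements[0], 0.0  # initial position and velocity states
        out = []
        for z in measurements:
            x_pred = x + v * dt       # predict position from velocity
            r = z - x_pred            # measurement residual
            x = x_pred + alpha * r    # correct position
            v = v + (beta / dt) * r   # correct velocity
            out.append(x)
        return out

    random.seed(3)
    # Target drifting at a constant (hypothetical) rate, observed with noise,
    # standing in for raw template-match positions.
    truth = [0.005 * k for k in range(200)]
    meas = [t + random.gauss(0.0, 0.3) for t in truth]
    smoothed = alpha_beta_track(meas)
    ```

    The filter suppresses frame-to-frame matching noise while following slow drift; pairing it with a strict similarity threshold, as the ABST method does, additionally rejects low-confidence matches before they reach the observer.
    
    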

  1. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Artan, Guleid A.; Tokar, S.A.; Gautam, D.K.; Bajracharya, S.R.; Shrestha, M.S.

    2011-01-01

    In Nepal, the spatial distribution of rain gauges is not sufficient to capture the highly varied spatial nature of rainfall, so satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was calibrated with gridded gauge-observed rainfall (kriging interpolation) for 2003 and validated for 2004, with both periods achieving a Nash-Sutcliffe efficiency above 0.7. When the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC_RFE2.0) were used for 2003 with the same calibrated parameters, model performance deteriorated, but it improved after recalibration with CPC_RFE2.0, indicating the need to recalibrate the model for satellite-based rainfall inputs. Adjusting CPC_RFE2.0 by seasonal, monthly, and 7-day moving average ratios further improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained by ingesting local rain gauge data resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction.
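    A moving-average ratio adjustment of the kind described can be sketched as follows. This is a minimal illustration, not the authors' exact procedure; the function name, window handling, and zero-padding at the edges are assumptions:

    ```python
    import numpy as np

    def ratio_bias_adjust(sat, gauge, window=7):
        """Adjust satellite rainfall by a moving-average gauge/satellite ratio.

        Both inputs are daily series over the same period; a window-day moving
        average of each is used to form a multiplicative correction ratio.
        Note: np.convolve(mode="same") zero-pads, so edge values are damped.
        """
        sat = np.asarray(sat, dtype=float)
        gauge = np.asarray(gauge, dtype=float)
        kernel = np.ones(window) / window
        sat_ma = np.convolve(sat, kernel, mode="same")
        gauge_ma = np.convolve(gauge, kernel, mode="same")
        # Avoid division by zero on dry spells: leave those days unadjusted.
        ratio = np.where(sat_ma > 0, gauge_ma / np.where(sat_ma > 0, sat_ma, 1.0), 1.0)
        return sat * ratio
    ```

    A seasonal or monthly variant would simply compute the ratio per calendar month instead of over a moving window.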

  2. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    PubMed Central

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2014-01-01

    Two studies are reported: a pilot study to demonstrate feasibility, followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake. PMID:22134199

  3. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose, relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. The Hunter-Schmidt psychometric meta-analysis approach was then applied to the included studies to estimate the population criterion-related validity of sit-and-reach tests. First, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults, and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility.
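    The core of the Hunter-Schmidt approach — correcting the observed variance of correlations for sampling error — can be sketched in a few lines. This is a bare-bones illustration only; the full method also corrects for measurement error, which is omitted here, and the function name is hypothetical:

    ```python
    import numpy as np

    def hunter_schmidt(rs, ns):
        """Bare-bones Hunter-Schmidt meta-analysis of correlations.

        rs: observed correlations per study; ns: per-study sample sizes.
        Returns the sample-size-weighted mean correlation and the residual
        (between-study) variance after removing expected sampling-error variance.
        """
        rs, ns = np.asarray(rs, float), np.asarray(ns, float)
        r_bar = np.sum(ns * rs) / np.sum(ns)                   # weighted mean r
        var_r = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)    # observed variance
        var_e = (1 - r_bar ** 2) ** 2 * len(rs) / np.sum(ns)   # sampling-error variance
        var_rho = max(var_r - var_e, 0.0)                      # residual "true" variance
        return r_bar, var_rho
    ```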

  4. Validation of one-mile walk equations for the estimation of aerobic fitness in British military personnel under the age of 40 years.

    PubMed

    Lunt, Heather; Roiz De Sa, Daniel; Roiz De Sa, Julia; Allsopp, Adrian

    2013-07-01

    To provide an accurate estimate of peak oxygen uptake (VO2 peak) for British Royal Navy personnel aged between 18 and 39, comparing a gold standard treadmill-based maximal exercise test with a submaximal one-mile walk test. Two hundred military personnel consented to perform a treadmill-based VO2 peak test and two one-mile walk tests around an athletics track. The estimated VO2 peak values from three different one-mile walk equations were compared to directly measured VO2 peak values from the treadmill-based test. One hundred participants formed a validation group from which a new equation was derived, and the other 100 participants formed the cross-validation group. Existing equations underestimated the VO2 peak values of the fittest personnel and overestimated the VO2 peak of the least aerobically fit by between 2% and 18%. The new equation derived from the validation group has less bias, the highest correlation with the measured values (r = 0.83), and correctly classified the most people according to the Royal Navy's Fitness Test standards, producing the fewest false positives and false negatives combined (9%). The new equation will provide a more accurate estimate of VO2 peak for a British military population aged 18 to 39. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
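    Walk-test prediction equations of this kind are typically ordinary least-squares fits of measured VO2 peak on walk time, heart rate, and demographic covariates. A minimal sketch under that assumption (the predictor set and function names here are illustrative, not the study's actual equation):

    ```python
    import numpy as np

    def fit_walk_equation(X, vo2peak):
        """Least-squares fit of a walk-test equation: columns of X might be
        body mass, age, sex, one-mile walk time, and end-of-walk heart rate."""
        A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
        coef, *_ = np.linalg.lstsq(A, vo2peak, rcond=None)
        return coef

    def predict_vo2(coef, X):
        """Apply fitted coefficients to new predictor rows."""
        A = np.column_stack([np.ones(len(X)), X])
        return A @ coef
    ```

    Cross-validation, as in the study, would fit on one half of the sample and evaluate bias and correlation on the held-out half.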

  5. Calibration and validation of TRUST MRI for the estimation of cerebral blood oxygenation

    PubMed Central

    Lu, Hanzhang; Xu, Feng; Grgac, Ksenija; Liu, Peiying; Qin, Qin; van Zijl, Peter

    2011-01-01

    Recently, a T2-Relaxation-Under-Spin-Tagging (TRUST) MRI technique was developed to quantitatively estimate blood oxygen saturation fraction (Y) via the measurement of pure blood T2. This technique has shown promise for normalization of fMRI signals, for the assessment of oxygen metabolism, and in studies of cognitive aging and multiple sclerosis. However, a human validation study has not been conducted. In addition, the calibration curve used to convert blood T2 to Y has not accounted for the effects of hematocrit (Hct). In the present study, we first conducted experiments on blood samples under physiologic conditions, and the Carr-Purcell-Meiboom-Gill (CPMG) T2 was determined for a range of Y and Hct values. The data were fitted to a two-compartment exchange model to allow the characterization of a three-dimensional plot that can serve to calibrate the in vivo data. Next, in a validation study in humans, we showed that arterial Y estimated using TRUST MRI was 0.837±0.036 (N=7) during the inhalation of 14% O2, which was in excellent agreement with the gold-standard Y values of 0.840±0.036 based on Pulse-Oximetry. These data suggest that the availability of this calibration plot should enhance the applicability of TRUST MRI for non-invasive assessment of cerebral blood oxygenation. PMID:21590721

  6. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
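    The best-known moment-based estimator of the between-study variance is the DerSimonian-Laird estimator, which can be sketched directly from its defining moments. A minimal illustration (not code from the article):

    ```python
    import numpy as np

    def dersimonian_laird_tau2(y, v):
        """Moment-based (DerSimonian-Laird) estimate of between-study variance.

        y: study effect estimates; v: their within-study variances.
        """
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v                        # fixed-effect (inverse-variance) weights
        mu_fe = np.sum(w * y) / np.sum(w)  # fixed-effect pooled estimate
        q = np.sum(w * (y - mu_fe) ** 2)   # Cochran's Q statistic
        df = len(y) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max((q - df) / c, 0.0)      # truncate negative estimates at zero
    ```

    The truncation at zero is exactly why the estimator is computationally simple but can be biased in small samples, which motivates approximate confidence intervals for it.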

  7. Supersensitive ancilla-based adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Larson, Walker; Saleh, Bahaa E. A.

    2017-10-01

    The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.

  8. A Novel Rules Based Approach for Estimating Software Birthmark

    PubMed Central

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    A software birthmark is an inherent characteristic of a program that can be used to detect software theft. Comparing the birthmarks of two programs can tell us whether one is a copy of the other. Software theft and piracy are rapidly growing problems involving the copying, stealing, and misuse of software without the permission specified in the license agreement. Estimating a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, namely credibility and resilience. For this purpose, soft computing concepts such as probabilistic and fuzzy computing are employed, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, indicates how much effort would be required to detect the originality of software based on its birthmark. PMID:25945363

  9. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method. PMID:27382610

  11. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals.

    PubMed

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method.

  12. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  13. Toward On-line Parameter Estimation of Concentric Tube Robots Using a Mechanics-based Kinematic Model

    PubMed Central

    Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo

    2017-01-01

    Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554

  14. Development and validation of a two-dimensional fast-response flood estimation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory-controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
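    The upwind-differencing idea is easiest to see on the simplest hyperbolic problem, linear advection, rather than the full shallow water system. A sketch under that simplifying assumption (the function name is hypothetical, and the full model solves the complete equations, not this scalar one):

    ```python
    import numpy as np

    def upwind_advect(u0, c, dx, dt, steps):
        """First-order upwind scheme for linear advection u_t + c u_x = 0, c > 0.

        Uses a backward (upwind) spatial difference with periodic boundaries;
        stable when the CFL number c*dt/dx is at most 1.
        """
        u = np.asarray(u0, float).copy()
        cfl = c * dt / dx
        assert 0 < cfl <= 1, "CFL condition violated"
        for _ in range(steps):
            # take the difference against the upstream neighbor
            u = u - cfl * (u - np.roll(u, 1))
        return u
    ```

    The same one-sided differencing, applied to the mass and momentum fluxes of the shallow water equations, is what keeps the scheme stable without the cost of a Riemann solver.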

  15. Tumor response estimation in radar-based microwave breast cancer detection.

    PubMed

    Kurrant, Douglas J; Fear, Elise C; Westwick, David T

    2008-12-01

    Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.

  16. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-μm wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates, such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimations by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for this deterioration by adaptively utilizing the statistics of Doppler shift estimates in the strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is demonstrated by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.

  17. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop is used to tune the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers.
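    The nested CV procedure can be sketched with a toy shrunken-centroid-style classifier. Everything below is an illustrative stand-in for the study's actual code: the classifier, the `shrink` tuning parameter, and the function names are all assumptions made for the sketch.

    ```python
    import numpy as np

    def nearest_centroid_error(Xtr, ytr, Xte, yte, shrink):
        """Toy shrunken-centroid classifier: class centroids are pulled toward
        the overall mean by `shrink`, then test points go to the nearest centroid."""
        overall = Xtr.mean(axis=0)
        cents = {c: overall + (1 - shrink) * (Xtr[ytr == c].mean(axis=0) - overall)
                 for c in np.unique(ytr)}
        preds = [min(cents, key=lambda c: np.linalg.norm(x - cents[c])) for x in Xte]
        return np.mean(np.array(preds) != yte)

    def nested_cv_error(X, y, shrinks, k_outer=5, k_inner=5, seed=0):
        """Nested CV: the inner loop tunes `shrink`; the outer loop, which never
        sees the tuning decisions for its own test fold, estimates the error."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        outer = np.array_split(idx, k_outer)
        errs = []
        for i in range(k_outer):
            test = outer[i]
            train = np.concatenate([outer[j] for j in range(k_outer) if j != i])
            inner = np.array_split(train, k_inner)
            # inner CV on the outer-training set only, to pick the tuning parameter
            best = min(shrinks, key=lambda s: np.mean([
                nearest_centroid_error(
                    X[np.concatenate([inner[m] for m in range(k_inner) if m != j])],
                    y[np.concatenate([inner[m] for m in range(k_inner) if m != j])],
                    X[inner[j]], y[inner[j]], s)
                for j in range(k_inner)]))
            errs.append(nearest_centroid_error(X[train], y[train], X[test], y[test], best))
        return float(np.mean(errs))
    ```

    On "null" data the plain (non-nested) CV error of the tuned classifier is optimistically biased, while the nested estimate stays near chance, which is the article's central point.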

  18. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  19. Development and validation of satellite-based estimates of surface visibility

    NASA Astrophysics Data System (ADS)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V < 30 km), low (2 km ≤ V < 10 km), and poor (V < 2 km) visibilities and shows the most skill during June through September, when Heidke skill scores are between 0.2 and 0.4. We demonstrate that the aerosol (clear-sky) component of the GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
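    The Heidke skill score used in this validation is computed from a forecast-observation contingency table over the visibility categories. A minimal sketch (the 4×4 tables in the usage example are made-up illustrative data, not the study's results):

    ```python
    import numpy as np

    def heidke_skill_score(table):
        """Heidke skill score from an n x n contingency table
        (rows = retrieved category, columns = observed category)."""
        t = np.asarray(table, float)
        n = t.sum()
        hits = np.trace(t) / n                                      # proportion correct
        expected = np.sum(t.sum(axis=0) * t.sum(axis=1)) / n ** 2   # chance agreement
        return (hits - expected) / (1 - expected)
    ```

    A score of 1 means perfect categorization, 0 means no skill beyond chance, so the reported June-September scores of 0.2-0.4 indicate modest but real skill.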

  20. Development and validation of satellite based estimates of surface visibility

    NASA Astrophysics Data System (ADS)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

    A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), Low (2 km ≤ V < 10 km) and Poor (V < 2 km) visibilities and shows the most skill during June through September, when Heidke skill scores are between 0.2 and 0.4. We demonstrate that the aerosol (clear sky) component of the GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.

  1. Validity of two wearable monitors to estimate breaks from sedentary time

    PubMed Central

    Lyden, Kate; Kozey-Keadle, Sarah L.; Staudenmayer, John W.; Freedson, Patty S.

    2012-01-01

    Investigations employing wearable monitors have begun to examine how sedentary time behaviors influence health. Purpose: To demonstrate the utility of a measure of sedentary behavior and to validate the activPAL and ActiGraph GT3X for estimating measures of sedentary behavior: absolute number of breaks and break-rate. Methods: Thirteen participants completed two 10-hour conditions. During the baseline condition, participants performed normal daily activity and during the treatment condition, participants were asked to reduce and break-up their sedentary time. In each condition, participants wore two ActiGraph GT3X monitors and one activPAL. The ActiGraph was tested using the low frequency extension filter (AG-LFE) and the normal filter (AG-Norm). For both ActiGraph monitors two count cut-points to estimate sedentary time were examined: 100 and 150 counts∙min−1. Direct observation served as the criterion measure of total sedentary time, absolute number of breaks from sedentary time and break-rate (number of breaks per sedentary hour [brks.sed-hr−1]). Results: Break-rate was the only metric sensitive to changes in behavior between baseline (5.1 [3.3 to 6.8] brks.sed-hr−1) and treatment conditions (7.3 [4.7 to 9.8] brks.sed-hr−1) (mean [95% CI]). The activPAL produced valid estimates of all sedentary behavior measures and was sensitive to changes in break-rate between conditions (baseline: 5.1 [2.8 to 7.1] brks.sed-hr−1, treatment: 8.0 [5.8 to 10.2] brks.sed-hr−1). In general, the AG-LFE and AG-Norm were not accurate in estimating break-rate or absolute number of breaks and were not sensitive to changes between conditions. Conclusion: This study demonstrates the utility of expressing breaks from sedentary time as a rate per sedentary hour, a metric specifically relevant to free-living behavior, and provides further evidence that the activPAL is a valid tool to measure components of sedentary behavior in free-living environments. PMID:22648343
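    The break-rate metric is straightforward to compute from an epoch-level sedentary/active sequence. A minimal sketch assuming 60 s epochs and a 0/1 coding (the function name and coding convention are illustrative, not the monitors' output format):

    ```python
    import numpy as np

    def break_rate(sedentary, epoch_s=60):
        """Breaks from sedentary time per sedentary hour.

        `sedentary` is a per-epoch 0/1 sequence (1 = sedentary). A break is a
        transition from sedentary (1) to non-sedentary (0)."""
        s = np.asarray(sedentary, int)
        breaks = int(np.sum((s[:-1] == 1) & (s[1:] == 0)))
        sed_hours = s.sum() * epoch_s / 3600.0
        return breaks / sed_hours if sed_hours > 0 else float("nan")
    ```

    Normalizing by sedentary hours rather than total wear time is what makes the metric comparable between the baseline and treatment conditions, where total sedentary time differs.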

  2. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-12-19

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.

  3. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    PubMed

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinctive duplicate portion (DP) as the reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DPs, by an average of five 24hRs, and by two FFQs. Plasma n-3 fatty acids and LA were used to objectively compare the ranking of individuals based on the DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as the reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than based on the 24hR. Furthermore, when using plasma fatty acids as the reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0.33) and n-3:LA (0.34) than the 24hR (0.22 and 0.24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.

  4. Web-based Food Behaviour Questionnaire: validation with grades six to eight students.

    PubMed

    Hanning, Rhona M; Royall, Dawna; Toews, Jenn E; Blashill, Lindsay; Wegener, Jessica; Driezen, Pete

    2009-01-01

    The web-based Food Behaviour Questionnaire (FBQ) includes a 24-hour diet recall, a food frequency questionnaire, and questions addressing knowledge, attitudes, intentions, and food-related behaviours. The survey has been revised since it was developed and initially validated. The current study was designed to obtain qualitative feedback and to validate the FBQ diet recall. "Think aloud" techniques were used in cognitive interviews with dietitian experts (n=11) and grade six students (n=21). Multi-ethnic students (n=201) in grades six to eight at urban southern Ontario schools completed the FBQ and, subsequently, one-on-one diet recall interviews with trained dietitians. Food group and nutrient intakes were compared. Users provided positive feedback on the FBQ. Suggestions included adding more foods, more photos for portion estimation, and online student feedback. Energy and nutrient intakes were positively correlated between the FBQ and dietitian interviews, overall and by gender and grade (all p<0.001). Intraclass correlation coefficients were ≥0.5 for energy and macronutrients, although the web-based survey underestimated energy (by 10.5%) and carbohydrate (by 15.6%) intakes (p<0.05). Underestimation of rice and pasta portions on the web accounted for 50% of this discrepancy. The FBQ is valid, relative to 24-hour recall interviews, for dietary assessment in diverse populations of Ontario children in grades six to eight.

  5. Validating Microwave-Based Satellite Rain Rate Retrievals Over TRMM Ground Validation Sites

    NASA Astrophysics Data System (ADS)

    Fisher, B. L.; Wolff, D. B.

    2008-12-01

    Multi-channel passive microwave instruments are commonly used to probe the structure of rain systems and to estimate surface rainfall from space. Until the advent of meteorological satellites and the development of remote sensing techniques for measuring precipitation from space, no observational system was capable of providing accurate estimates of surface precipitation on global scales. Since the early 1970s, microwave measurements from satellites have provided quantitative estimates of surface rainfall by observing the emission and scattering processes due to clouds and precipitation in the atmosphere. This study assesses the relative performance of microwave precipitation estimates from seven polar-orbiting satellites and the TRMM TMI using four years (2003-2006) of instantaneous radar rain estimates obtained from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The seven polar orbiters carry three different sensor types: SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), and AMSR-E. The TMI aboard the TRMM satellite flies in a sun-asynchronous orbit between 35 S and 35 N latitudes. The rain information from these satellites is combined to generate several multi-satellite rain products, namely the Goddard TRMM Multi-satellite Precipitation Analysis (TMPA), NOAA's CPC Morphing Technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). Instantaneous rain rates derived from each sensor were matched to the GV estimates in time and space at a resolution of 0.25 degrees. The study evaluates the measurement and error characteristics of the various satellite estimates through inter-comparisons with GV radar estimates. The GV rain observations provided an empirical ground-based reference for assessing the relative performance of each sensor and sensor

  6. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, the spatial distribution of rain gauges is not sufficient to resolve the highly varied spatial nature of rainfall, so satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32,000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was calibrated against 2003 stream flow and validated against 2004 stream flow using gridded gauge-observed rainfall inputs interpolated by kriging, with a Nash-Sutcliffe efficiency above 0.7 in both years. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated, but it improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model when using satellite-based rainfall estimates. Adjusting CPC-RFE2.0 by seasonal, monthly and 7-day moving-average ratios further improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained by ingesting local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates to flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
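    The moving-average ratio adjustment described above can be sketched as follows. The array names, the window length, and the simple multiplicative form are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ratio_bias_adjust(satellite, gauge, window=7):
    """Adjust satellite rainfall by a moving-average gauge/satellite ratio.

    A sketch of a 7-day moving-average ratio correction; the variable
    names and the simple ratio form are assumptions for illustration.
    """
    satellite = np.asarray(satellite, dtype=float)
    gauge = np.asarray(gauge, dtype=float)
    kernel = np.ones(window) / window
    # Moving averages of satellite and gauge rainfall over the window.
    sat_ma = np.convolve(satellite, kernel, mode="same")
    gauge_ma = np.convolve(gauge, kernel, mode="same")
    # Gauge-to-satellite ratio; guard against division by zero.
    ratio = np.where(sat_ma > 0, gauge_ma / np.maximum(sat_ma, 1e-9), 1.0)
    return satellite * ratio
```

    If the satellite systematically doubles the gauge rainfall, the ratio settles near 0.5 and the adjusted series tracks the gauge values.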

  7. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). Modeling the dose field is more challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution by the organ dose coefficient h_Organ. To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The
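    The convolution step can be illustrated with a toy one-dimensional sketch. The Gaussian spread kernel, the organ mask, and all variable names here are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def organ_dose_estimate(tcm_profile, organ_mask, h_organ, kernel_sigma=3.0):
    """Toy sketch of convolution-based organ dose estimation.

    The tube-current profile along z is smoothed with an assumed Gaussian
    dose-spread kernel, averaged over the organ's extent to give a regional
    CTDIvol, then scaled by the organ dose coefficient h_Organ.
    """
    z = np.arange(-10, 11)
    kernel = np.exp(-0.5 * (z / kernel_sigma) ** 2)
    kernel /= kernel.sum()
    # Regional dose field: tube-current profile smoothed by the kernel.
    dose_field = np.convolve(tcm_profile, kernel, mode="same")
    # (CTDIvol)_organ,convolution: mean regional dose over the organ extent.
    ctdi_conv = dose_field[organ_mask.astype(bool)].mean()
    # Organ dose = regional CTDIvol times the dose coefficient h_Organ.
    return ctdi_conv * h_organ
```

    With a flat tube-current profile the smoothed field is flat too, so the estimate reduces to the dose coefficient times the profile level.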

  8. Development and validation of anthropometric equations to estimate appendicular muscle mass in elderly women.

    PubMed

    Pereira, Piettra Moura Galvão; da Silva, Giselma Alcântara; Santos, Gilberto Moreira; Petroski, Edio Luiz; Geraldes, Amandio Aristides Rihan

    2013-07-02

    This study aimed to examine the cross-validity of two commonly used anthropometric equations and to propose simple anthropometric equations to estimate appendicular muscle mass (AMM) in elderly women. Among 234 physically active and functionally independent elderly women, 101 (60 to 89 years) were selected by simple random drawing to compose the study sample. The paired t test and the Pearson correlation coefficient were used for cross-validation, and concordance was verified by the intraclass correlation coefficient (ICC) and the Bland-Altman technique. To propose predictive models, multiple linear regression analysis was used, with the anthropometric measures body mass (BM), height, girths, skinfolds, body mass index (BMI) and muscle perimeters included as independent variables. Dual-energy X-ray absorptiometry (AMMDXA) served as the criterion measurement. Sample power calculations were carried out by post hoc computation of achieved power. Sample power values from 0.88 to 0.91 were observed. When compared, the two equations tested differed significantly from AMMDXA (p < 0.001 and p = 0.001). Ten population-specific anthropometric equations were developed to estimate AMM; among them, three achieved all validation criteria used: AMM (E2) = 4.150 + 0.251 (body mass, BM) - 0.411 (body mass index, BMI) + 0.011 (right forearm perimeter, PANTd)²; AMM (E3) = 4.087 + 0.255 (BM) - 0.371 (BMI) + 0.011 (PANTd)² - 0.035 (thigh skinfold, DCCO); AMM (E6) = 2.855 + 0.298 (BM) + 0.019 (age) - 0.082 (hip circumference, PQUAD) + 0.400 (PANTd) - 0.332 (BMI). The equations estimated the criterion method (p = 0.056 and p = 0.158), explained 69% to 74% of the variance observed in AMMDXA with low standard errors of the estimate (1.36 to 1.55 kg), and showed high concordance (ICC between 0.90 and 0.91; concordance limits from -2.93 to 2.33 kg). The equations tested were not valid for use in physically
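    For concreteness, equation E2 above translates directly into code; the input values in the usage note are made up for illustration.

```python
def amm_e2(body_mass_kg, bmi, forearm_perimeter_cm):
    """Appendicular muscle mass (kg) from equation E2 in the abstract:
    AMM = 4.150 + 0.251*BM - 0.411*BMI + 0.011*PANTd^2."""
    return (4.150 + 0.251 * body_mass_kg
            - 0.411 * bmi
            + 0.011 * forearm_perimeter_cm ** 2)
```

    For a hypothetical 65 kg woman with a BMI of 26 and a 23 cm right forearm perimeter, E2 gives about 15.6 kg of appendicular muscle mass.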

  9. Validating alternative methodologies to estimate the regime of temporary rivers when flow data are unavailable.

    PubMed

    Gallart, F; Llorens, P; Latron, J; Cid, N; Rieradevall, M; Prat, N

    2016-09-15

    Hydrological data for assessing the regime of temporary rivers are often non-existent or scarce. This scarcity of flow data makes it impossible to characterize the hydrological regime of temporary streams and, in consequence, to select the correct periods and methods to determine their ecological status. This is why the TREHS software is being developed, in the framework of the LIFE Trivers project; it will help managers implement the European Water Framework Directive adequately in this kind of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six transient 'aquatic states', based on hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Because of its qualitative nature, this approach allows the use of alternative methodologies to assess the regime of temporary rivers when there are no observed flow data. These methods, based on interviews and high-resolution aerial photographs, were tested for estimating the aquatic regime of temporary rivers. All 13 gauging stations belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate this methodology. On the one hand, non-structured interviews were conducted with inhabitants of villages near the gauging stations; on the other hand, the historical series of available orthophotographs were examined. Flow records measured at the gauging stations were used to validate the alternative methods. Flow permanence in the reaches was estimated reasonably by the interviews and adequately by the aerial photographs, when compared with the values estimated using daily flows. The degree of seasonality was assessed only roughly by the interviews. The recurrence of disconnected pools was not detected by the flow records but was estimated, with some divergences, by the two methods.
The combination of the two alternative methods allows substituting or complementing flow records, to be updated in the future through

  10. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and by muscle fatigue thresholds from electromyographic (EMG) activity during incremental exercise on a cycle ergometer. Sixty-nine untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol applied a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated AT point times computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. The proposed computing procedure, implemented in Matlab for the analysis of EMG signals, also appeared to be valid and reliable, as it produced nearly identical values and high correlations with the VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy.

    PubMed

    Porto, Paolo; Walling, Des E

    2012-04-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 (137Cs) and excess lead-210 (210Pbex), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by 137Cs and 210Pbex measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy for validating estimates of soil loss provided by 137Cs and 210Pbex measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land could be undertaken.
The results demonstrate a close consistency between the measured rates of soil

  12. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    PubMed

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing the test scores of impaired patients and normal control subjects. Construct-related validity was assessed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, positive and negative predictive values were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group because of unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease, as typically used in validation studies, may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.

  13. Detecting symptom exaggeration in combat veterans using the MMPI-2 symptom validity scales: a mixed group validation.

    PubMed

    Tolin, David F; Steenkamp, Maria M; Marx, Brian P; Litz, Brett T

    2010-12-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the application of these scales has been problematic when assessing combat veterans undergoing posttraumatic stress disorder (PTSD) examinations. Mixed group validation (MGV) was employed to determine the efficacy of MMPI-2 exaggeration scales in compensation-seeking (CS) and noncompensation-seeking (NCS) veterans. Unlike CGV, MGV allows for a mix of exaggerating and nonexaggerating individuals in each group, does not require that the exaggeration versus nonexaggerating status of any individual be known, and can be adjusted for different base-rate estimates. MMPI-2 responses of 377 male veterans were examined according to CS versus NCS status. MGV was calculated using 4 sets of base-rate estimates drawn from the literature. The validity scales generally performed well (adequate sensitivity, specificity, and efficiency) under most base-rate estimations, and most produced cutoff scores that showed adequate detection of symptom exaggeration, regardless of base-rate assumptions. These results support the use of MMPI-2 validity scales for PTSD evaluations in veteran populations, even under varying base rates of symptom exaggeration.
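    The core mixed-group-validation identity can be sketched as a two-equation solve for sensitivity and specificity; this closed form is our illustration of the general MGV principle, not the authors' exact estimation procedure.

```python
def mgv_sensitivity_specificity(pos_rate_1, base_rate_1, pos_rate_2, base_rate_2):
    """Mixed group validation: recover sensitivity and specificity from the
    observed positive ('exaggeration-flagged') rates of two groups whose
    assumed base rates of true exaggeration differ (they need not be 0 or 1).

    Solves r_i = p_i * se + (1 - p_i) * (1 - sp) for se and sp; the two
    base rates must differ or the system is underdetermined.
    """
    p1, p2 = base_rate_1, base_rate_2
    r1, r2 = pos_rate_1, pos_rate_2
    # Subtracting the two equations isolates (se - fp), where fp = 1 - sp.
    se_minus_fp = (r1 - r2) / (p1 - p2)
    fp = r1 - p1 * se_minus_fp  # since r1 = p1*(se - fp) + fp
    se = se_minus_fp + fp
    return se, 1.0 - fp
```

    With synthetic groups generated from known sensitivity 0.8 and specificity 0.9 at base rates 0.6 and 0.2, the solve recovers both values exactly.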

  14. Validation of an administrative claims-based diagnostic code for pneumonia in a US-based commercially insured COPD population.

    PubMed

    Kern, David M; Davis, Jill; Williams, Setareh A; Tunceli, Ozgur; Wu, Bingcao; Hollis, Sally; Strange, Charlie; Trudo, Frank

    2015-01-01

    To estimate the accuracy of claims-based pneumonia diagnoses in COPD patients using clinical information in medical records as the reference standard. Selecting from a repository containing members' data from 14 regional United States health plans, this validation study identified pneumonia diagnoses within a group of patients initiating treatment for COPD between March 1, 2009 and March 31, 2012. Patients with ≥1 claim for pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes 480.xx-486.xx) were identified during the 12 months following treatment initiation. A subset of 800 patients was randomly selected for medical record abstraction (paper-based and electronic), with a target sample of 400 patients, to estimate validity within a 5% margin of error. Positive predictive value (PPV) was calculated for the claims diagnosis of pneumonia relative to the reference standard, defined as a documented diagnosis in the medical record. A total of 388 records were reviewed; 311 included a documented pneumonia diagnosis, indicating that 80.2% (95% confidence interval [CI]: 75.8% to 84.0%) of claims-identified pneumonia diagnoses were validated by the medical charts. Claims-based diagnoses from inpatient or emergency department settings (n=185) had greater PPV than those from outpatient settings (n=203): 87.6% (95% CI: 81.9%-92.0%) versus 73.4% (95% CI: 66.8%-79.3%), respectively. Claims diagnoses verified with paper-based charts had similar PPV to the overall study sample, 80.2% (95% CI: 71.1%-87.5%), and higher PPV than those linked to electronic medical records, 73.3% (95% CI: 65.5%-80.2%). Combined paper-based and electronic records had a higher PPV, 87.6% (95% CI: 80.9%-92.6%). Administrative claims data indicating a diagnosis of pneumonia in COPD patients are supported by medical records. The accuracy of a medical record diagnosis of pneumonia remains unknown. With increased use of claims data in medical research, COPD researchers can study pneumonia with confidence that claims
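    The headline PPV above is simple proportion arithmetic (311/388 ≈ 80.2%). The sketch below pairs it with a Wilson score interval; the paper does not state its CI method, so the Wilson bounds only approximate the published figures.

```python
import math

def ppv_with_wilson_ci(true_positives, total_flagged, z=1.96):
    """Positive predictive value with an approximate 95% Wilson score
    interval. The CI method is our assumption; the source may have used
    a different (e.g. exact binomial) interval."""
    p = true_positives / total_flagged
    n = total_flagged
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, center - half, center + half
```

    Calling `ppv_with_wilson_ci(311, 388)` reproduces the 80.2% point estimate, with Wilson bounds close to the published 75.8%-84.0% interval.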

  15. Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Pang; Yu, Yue

    2017-05-01

    This paper presents a position estimation model for switched reluctance motors based on a single threshold angle. Given the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold angle position (7.5° threshold angle, 12/8 SRM). The sensorless model was built in Matlab/Simulink, and simulations were carried out under both steady-state and transient conditions, verifying the validity and feasibility of the method.

  16. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  17. A new age-based formula for estimating weight of Korean children.

    PubMed

    Park, Jungho; Kwak, Young Ho; Kim, Do Kyun; Jung, Jae Yun; Lee, Jin Hee; Jang, Hye Young; Kim, Hahn Bom; Hong, Ki Jeong

    2012-09-01

    The objective of this study was to develop and validate a new age-based formula for estimating the body weights of Korean children. We obtained body weight and age data from a survey conducted in 2005 by the Korean Pediatric Society to establish normative values for Korean children. Children aged 0-14 years were enrolled and divided into three groups according to age: infants (<12 months), preschool-aged (1-4 years) and school-aged children (5-14 years). Seventy-five percent of all subjects were randomly selected to form a derivation set. Regression analysis was performed to produce equations predicting weight from age for each group. The linear equations derived from this analysis were simplified to create a weight-estimating formula for Korean children. This formula was then validated using the remaining 25% of the study subjects, with mean percentage error and absolute error. To determine whether the new formula accurately predicts actual weights of Korean children, we also compared it to other weight estimation methods (APLS, Shann formula, Leffler formula, Nelson formula and Broselow tape). Data from a total of 124,095 children were included; 19,854 (16.0%), 40,612 (32.7%) and 63,629 (51.3%) were classified into the infant, preschool-aged and school-aged groups, respectively. Three equations, (age in months+9)/2, 2×(age in years)+9 and 4×(age in years)-1, were derived for the infant, preschool and school-aged groups, respectively. When these equations were applied to the validation set, the actual average weight of those children was 0.4 kg heavier than our estimated weight (95% CI=0.37-0.43, p<0.001). The mean percentage error of our model (+0.9%) was lower than APLS (-11.5%), Shann formula (-8.6%), Leffler formula (-1.7%), Nelson formula (-10.0%), Best Guess formula (+5.0%) and Broselow tape (-4.8%) for all age groups. We developed and validated a simple formula to estimate body weight from the age of Korean
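    The three derived equations can be wrapped in a single helper; only the three formulas come from the abstract, while the argument handling is our own convenience.

```python
def estimate_weight_kg(age_years=None, age_months=None):
    """Age-based weight estimate (kg) for Korean children, using the three
    equations reported in the abstract. Pass age_months for infants."""
    if age_months is not None and age_months < 12:
        return (age_months + 9) / 2      # infants (<12 months)
    years = age_years if age_years is not None else age_months // 12
    if years <= 4:
        return 2 * years + 9             # preschool-aged (1-4 years)
    return 4 * years - 1                 # school-aged (5-14 years)
```

    For example, a 6-month-old is estimated at 7.5 kg, a 3-year-old at 15 kg, and a 10-year-old at 39 kg.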

  18. Validating GPM-based Multi-satellite IMERG Products Over South Korea

    NASA Astrophysics Data System (ADS)

    Wang, J.; Petersen, W. A.; Wolff, D. B.; Ryu, G. H.

    2017-12-01

    Accurate precipitation estimates derived from space-borne satellite measurements are critical for a wide variety of applications, such as water budget studies and the prevention or mitigation of natural hazards caused by extreme precipitation events. This study validates the near-real-time Early Run and Late Run and the research-quality Final Run Integrated Multi-Satellite Retrievals for GPM (IMERG) against Korean Quantitative Precipitation Estimation (QPE). The Korean QPE data are at a 1-hour temporal resolution and 1-km by 1-km spatial resolution, and were developed by the Korea Meteorological Administration (KMA) from a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system utilizing eleven radars over the Republic of Korea. The validation is conducted by comparing Version-04A IMERG (Early, Late and Final Runs) with Korean QPE over the area (124.5E-130.5E, 32.5N-39N) at various spatial and temporal scales from March 2014 through November 2016. The comparisons demonstrate the reasonably good ability of the Version-04A IMERG products to estimate precipitation over South Korea's complex topography, which consists mainly of hills and mountains as well as large coastal plains. Based on these data, the Early Run, Late Run and Final Run IMERG precipitation estimates above 0.1 mm h-1 are about 20.1%, 7.5% and 6.1% higher than Korean QPE at 0.1° and 1-hour resolutions. Detailed comparison results are available at https://wallops-prf.gsfc.nasa.gov/KoreanQPE.V04/index.html

  19. Validation of walk score for estimating neighborhood walkability: an analysis of four US metropolitan areas.

    PubMed

    Duncan, Dustin T; Aldstadt, Jared; Whalen, John; Melly, Steven J; Gortmaker, Steven L

    2011-11-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability against GIS-based (objective) indicators of neighborhood walkability, using addresses from four US metropolitan areas and several street-network buffer distances (400, 800, and 1,600 meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5-11 years and their families participating in YMCA-administered after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated, as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and GIS neighborhood walkability indicators such as density of retail destinations and intersection density (p < 0.05). The magnitude varied by the GIS indicator of neighborhood walkability. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure for estimating certain aspects of neighborhood walkability, particularly at the 1,600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of neighborhood walkability in multiple geographic locations and at multiple spatial scales.

  20. Validation of spatiodemographic estimates produced through data fusion of small area census records and household microdata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Amy N.; Nagle, Nicholas N.

Techniques such as Iterative Proportional Fitting have been previously suggested as a means to generate new data with the demographic granularity of individual surveys and the spatial granularity of small area tabulations of censuses and surveys. This article explores internal and external validation approaches for synthetic, small area, household- and individual-level microdata using a case study for Bangladesh. Using data from the Bangladesh Census 2011 and the Demographic and Health Survey, we produce estimates of infant mortality rate and other household attributes for small areas using a variation of an iterative proportional fitting method called P-MEDM. We conduct an internal validation to determine: whether the model accurately recreates the spatial variation of the input data, how each of the variables performed overall, and how the estimates compare to the published population totals. We conduct an external validation by comparing the estimates with indicators from the 2009 Multiple Indicator Cluster Survey (MICS) for Bangladesh to benchmark how well the estimates compared to a known dataset which was not used in the original model. The results indicate that the estimation process is viable for regions that are better represented in the microdata sample, but also revealed the possibility of strong overfitting in sparsely sampled sub-populations.

  1. Validation of spatiodemographic estimates produced through data fusion of small area census records and household microdata

    DOE PAGES

    Rose, Amy N.; Nagle, Nicholas N.

    2016-08-01

Techniques such as Iterative Proportional Fitting have been previously suggested as a means to generate new data with the demographic granularity of individual surveys and the spatial granularity of small area tabulations of censuses and surveys. This article explores internal and external validation approaches for synthetic, small area, household- and individual-level microdata using a case study for Bangladesh. Using data from the Bangladesh Census 2011 and the Demographic and Health Survey, we produce estimates of infant mortality rate and other household attributes for small areas using a variation of an iterative proportional fitting method called P-MEDM. We conduct an internal validation to determine: whether the model accurately recreates the spatial variation of the input data, how each of the variables performed overall, and how the estimates compare to the published population totals. We conduct an external validation by comparing the estimates with indicators from the 2009 Multiple Indicator Cluster Survey (MICS) for Bangladesh to benchmark how well the estimates compared to a known dataset which was not used in the original model. The results indicate that the estimation process is viable for regions that are better represented in the microdata sample, but also revealed the possibility of strong overfitting in sparsely sampled sub-populations.
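P-MEDM itself is a penalized variant of iterative proportional fitting; the classic IPF step it builds on can be sketched in a few lines. The seed table and margins below are hypothetical, standing in for a survey cross-tabulation being rescaled to small-area census totals:

```python
def ipf(seed, row_targets, col_targets, iters=100, tol=1e-10):
    """Classic iterative proportional fitting: alternately rescale the
    rows and columns of a seed table until its margins match the targets
    (targets must share the same grand total for convergence)."""
    table = [row[:] for row in seed]
    for _ in range(iters):
        # scale each row to its target sum
        for i, target in enumerate(row_targets):
            s = sum(table[i])
            if s > 0:
                table[i] = [v * target / s for v in table[i]]
        # scale each column to its target sum
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in table)
            if s > 0:
                for i in range(len(table)):
                    table[i][j] *= target / s
        # stop once row sums have (re)converged after column scaling
        if all(abs(sum(table[i]) - t) < tol for i, t in enumerate(row_targets)):
            break
    return table

# Hypothetical seed: survey cross-tab of household type x income band,
# rescaled to small-area margins of 40/60 (rows) and 30/70 (columns)
seed = [[10, 5], [3, 12]]
fitted = ipf(seed, row_targets=[40, 60], col_targets=[30, 70])
```

The fitted table preserves the seed's interaction structure (odds ratios) while matching both sets of margins, which is what lets survey microdata be carried down to small areas.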

  2. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of combining commercially available anthropomorphic phantoms with irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
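The combination of a k-means background estimate with a threshold rule can be illustrated on a toy uptake list. Everything below is a hedged sketch: the voxel values, the 0.41 threshold fraction, and the background-plus-fraction-of-peak rule are illustrative assumptions, not the calibrated algorithm of the paper:

```python
def kmeans_1d(values, iters=50):
    """Tiny two-cluster 1-D k-means; returns (background_mean, lesion_mean)."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            idx = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[idx].append(v)
        new = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
        if new == centers:  # assignments stable -> converged
            break
        centers = new
    return tuple(sorted(centers))

def segment_mtv(voxels, voxel_volume_ml, threshold_fraction=0.41):
    """Sketch of a threshold-based MTV estimate: count voxels whose uptake
    exceeds background + fraction * (peak - background), where the
    background level comes from the k-means split."""
    background, _ = kmeans_1d(voxels)
    peak = max(voxels)
    cutoff = background + threshold_fraction * (peak - background)
    inside = [v for v in voxels if v >= cutoff]
    return len(inside) * voxel_volume_ml

# Hypothetical uptake values (arbitrary units) on a coarse voxel grid
voxels = [1.0, 1.2, 0.9, 1.1, 6.0, 7.5, 8.0, 7.2, 1.0, 6.8]
mtv_ml = segment_mtv(voxels, voxel_volume_ml=0.064)
```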

  3. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    PubMed

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms based on data from healthy individuals to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment like stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. For verifying validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location

  4. Estimation of low back moments from video analysis: a validation study.

    PubMed

    Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H

    2011-09-02

This study aimed to develop, compare and validate two versions of a video analysis method for assessment of low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.
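The abstract reports intraclass correlation coefficients above 0.85 but does not specify the ICC form. As a hedged illustration, a one-way random-effects ICC(1,1) for two measurements per subject (e.g., video method vs. reference) can be computed directly from the ANOVA mean squares:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for two measurements per subject:
    (MSB - MSW) / (MSB + (k - 1) * MSW), with k = 2 measurements."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    # between-subject mean square
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    # within-subject mean square
    msw = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
              for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical peak moments (Nm): (video method, lab reference) per lift
moments = [(180, 175), (220, 228), (150, 149), (200, 196), (170, 173)]
icc = icc_oneway(moments)
```

With between-subject variation large relative to the method disagreement, the ICC is close to one, which is the pattern the study reports.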

  5. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    PubMed

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence

  6. Validity evidence based on test content.

    PubMed

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  7. Development and validation of anthropometric equations to estimate appendicular muscle mass in elderly women

    PubMed Central

    2013-01-01

Objective This study aimed to examine the cross validity of two commonly used anthropometric equations and to propose simple anthropometric equations to estimate appendicular muscle mass (AMM) in elderly women. Methods Among 234 physically active and functionally independent elderly women, 101 (60 to 89 years) were selected through simple drawing to compose the study sample. The paired t test and the Pearson correlation coefficient were used to perform cross-validation, and concordance was verified by the intraclass correlation coefficient (ICC) and by the Bland and Altman technique. To propose predictive models, multiple linear regression analysis was used, with anthropometric measures of body mass (BM), height, girths, skinfolds, body mass index (BMI), and muscle perimeters included in the analysis as independent variables. Dual-Energy X-ray Absorptiometry (AMMDXA) was used as the criterion measurement. Sample power calculations were carried out by Post Hoc Compute Achieved Power. Sample power values from 0.88 to 0.91 were observed. Results When compared, the two equations tested differed significantly from AMMDXA (p < 0.001 and p = 0.001). Ten population-specific anthropometric equations were developed to estimate AMM; among them, three equations achieved all validation criteria used: AMM (E2) = 4.150 + 0.251 [body mass (BM)] - 0.411 [body mass index (BMI)] + 0.011 [right forearm perimeter (PANTd)²]; AMM (E3) = 4.087 + 0.255 (BM) - 0.371 (BMI) + 0.011 (PANTd)² - 0.035 [thigh skinfold (DCCO)]; AMM (E6) = 2.855 + 0.298 (BM) + 0.019 (Age) - 0.082 [hip circumference (PQUAD)] + 0.400 (PANTd) - 0.332 (BMI). The equations estimated the criterion method (p = 0.056 and p = 0.158) and explained 69% to 74% of the variation observed in AMMDXA, with low standard errors of the estimate (1.36 to 1.55 kg) and high concordance (ICC between 0.90 and 0.91 and concordance limits from -2.93 to 2.33 kg). Conclusion The equations tested
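The first validated equation (E2) can be transcribed directly into code. The coefficients follow the abstract; the subject's measurements below are hypothetical:

```python
def amm_e2(body_mass_kg, bmi, forearm_perimeter_cm):
    """Appendicular muscle mass (kg) from equation E2 of the abstract:
    AMM = 4.150 + 0.251*BM - 0.411*BMI + 0.011*PANTd^2,
    where PANTd is the right forearm perimeter in cm."""
    return (4.150 + 0.251 * body_mass_kg - 0.411 * bmi
            + 0.011 * forearm_perimeter_cm ** 2)

# Hypothetical subject: 65 kg, BMI 27, right forearm perimeter 24 cm
amm = amm_e2(65.0, 27.0, 24.0)
```

Note the equation is population-specific (physically active elderly Brazilian women in the study sample) and should not be assumed to transfer elsewhere.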

  8. On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models.

    PubMed

    Tang, Yongqiang

    2017-12-01

    Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
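The MI machinery under discussion rests on Rubin's rules, which pool the point estimates and variances across imputed data sets. A minimal sketch with hypothetical estimates; the article's point is precisely that this total variance T can be biased when the imputation model (e.g., control-based PMM) and the analysis model are uncongenial:

```python
def rubin_pooled(estimates, variances):
    """Rubin's rules: pool point estimates and variances across m
    multiply imputed data sets."""
    m = len(estimates)
    qbar = sum(estimates) / m                               # pooled estimate
    ubar = sum(variances) / m                               # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    t = ubar + (1 + 1 / m) * b                              # total variance
    return qbar, t

# Hypothetical treatment-effect estimates from m = 5 imputations
est = [1.8, 2.1, 1.9, 2.0, 2.2]
var = [0.25, 0.24, 0.26, 0.25, 0.25]
pooled_effect, pooled_var = rubin_pooled(est, var)
```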

  9. Optimal combining of ground-based sensors for the purpose of validating satellite-based rainfall estimates

    NASA Technical Reports Server (NTRS)

    Krajewski, Witold F.; Rexroth, David T.; Kiriaki, Kiriakie

    1991-01-01

    Two problems related to radar rainfall estimation are described. The first part is a description of a preliminary data analysis for the purpose of statistical estimation of rainfall from multiple (radar and raingage) sensors. Raingage, radar, and joint radar-raingage estimation is described, and some results are given. Statistical parameters of rainfall spatial dependence are calculated and discussed in the context of optimal estimation. Quality control of radar data is also described. The second part describes radar scattering by ellipsoidal raindrops. An analytical solution is derived for the Rayleigh scattering regime. Single and volume scattering are presented. Comparison calculations with the known results for spheres and oblate spheroids are shown.

  10. Validity and Feasibility of a Digital Diet Estimation Method for Use with Preschool Children: A Pilot Study

    ERIC Educational Resources Information Center

    Nicklas, Theresa A.; O'Neil, Carol E.; Stuff, Janice; Goodell, Lora Suzanne; Liu, Yan; Martin, Corby K.

    2012-01-01

    Objective: The goal of the study was to assess the validity and feasibility of a digital diet estimation method for use with preschool children in "Head Start." Methods: Preschool children and their caregivers participated in validation (n = 22) and feasibility (n = 24) pilot studies. Validity was determined in the metabolic research unit using…

  11. Validation of NH3 satellite observations by ground-based FTIR measurements

    NASA Astrophysics Data System (ADS)

    Dammers, Enrico; Palm, Mathias; Van Damme, Martin; Shephard, Mark; Cady-Pereira, Karen; Capps, Shannon; Clarisse, Lieven; Coheur, Pierre; Erisman, Jan Willem

    2016-04-01

Global emissions of reactive nitrogen have been increasing to an unprecedented level due to human activities and are estimated to be a factor of four larger than pre-industrial levels. Concentration levels of NOx are declining, but ammonia (NH3) levels are increasing around the globe. While NH3 at its current concentrations poses significant threats to the environment and human health, relatively little is known about the total budget and global distribution. Surface observations are sparse and mainly available for north-western Europe, the United States and China and are limited by the high costs and poor temporal and spatial resolution. Since the lifetime of atmospheric NH3 is short, on the order of hours to a few days, due to efficient deposition and fast conversion to particulate matter, the existing surface measurements are not sufficient to estimate global concentrations. Advanced space-based IR-sounders such as the Tropospheric Emission Spectrometer (TES), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) enable global observations of atmospheric NH3 that help overcome some of the limitations of surface observations. However, the satellite NH3 retrievals are complex, requiring extensive validation. Presently there have only been a few dedicated satellite NH3 validation campaigns, performed with limited spatial, vertical or temporal coverage. Recently a retrieval methodology was developed for ground-based Fourier Transform Infrared Spectroscopy (FTIR) instruments to obtain vertical concentration profiles of NH3. Here we show the applicability of retrieved columns from nine globally distributed stations with a range of NH3 pollution levels to validate satellite NH3 products.

  12. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  13. Is Earth-based scaling a valid procedure for calculating heat flows for Mars?

    NASA Astrophysics Data System (ADS)

    Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle

    2013-09-01

Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows from scaling the present-day terrestrial heat flow to Mars, but the values so obtained are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and cast doubt on the validity of the results obtained by these works. Here however we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that the scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.

  14. Estimation of median growth curves for children up two years old based on biresponse local linear estimator

    NASA Astrophysics Data System (ADS)

    Chamidah, Nur; Rifada, Marisa

    2016-03-01

There is a significant correlation between the weight and height of children. Therefore, simultaneous model estimation is better than a partial, single-response approach. In this study we investigate the pattern of sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected in a longitudinal representative sample of the Surabaya population of healthy children and consist of two response variables, weight (kg) and height (cm), with age (months) as the predictor variable. Based on the generalized cross validation criterion, the biresponse model using a local linear estimator gives optimal bandwidths of 1.41 and 1.56 for the boys' and girls' growth curves and determination coefficients (R²) of 99.99% and 99.98%, respectively. Both curves satisfy the goodness-of-fit criterion, i.e., the determination coefficient tends to one. There is also a difference in growth pattern between boys and girls: the boys' median growth curve is higher than the girls'.
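The biresponse estimator in the abstract is a multivariate extension; its univariate core, a kernel-weighted local linear fit, can be sketched as follows. The ages and weights are hypothetical illustrative values, and the bandwidth is fixed here rather than chosen by generalized cross validation:

```python
import math

def local_linear(xs, ys, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel of
    bandwidth h, via the standard equivalent-kernel weights."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    num = sum(wi * (s2 - s1 * (x - x0)) * y for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (s2 - s1 * (x - x0)) for wi, x in zip(w, xs))
    return num / den

# Hypothetical ages (months) and weights (kg) loosely resembling growth data
ages = [0, 2, 4, 6, 9, 12, 18, 24]
weights = [3.3, 5.1, 6.4, 7.4, 8.6, 9.5, 10.8, 12.0]
w12 = local_linear(ages, weights, 12.0, h=3.0)
```

A useful property (and a quick sanity check): local linear smoothing reproduces exactly linear data with zero bias, regardless of bandwidth.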

  15. An Evaluation of Available Models for Estimating the Reliability and Validity of Criterion Referenced Measures.

    ERIC Educational Resources Information Center

    Oakland, Thomas

New strategies for evaluating criterion referenced measures (CRM) are discussed. These strategies examine the following issues: (1) the use of norm referenced measures (NRM) as CRM and then estimating the reliability and validity of such measures in terms of variance from an arbitrarily specified criterion score, (2) estimation of the…

  16. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
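The evaluation scheme in the abstract can be reduced to a toy skeleton: k-fold cross-validation around a simple classifier. The sketch below uses a nearest-centroid classifier on synthetic, well-separated data; in the study's actual setup, feature selection and parameter tuning run inside each training fold (nested CV), and the point of the paper is that batch effects confounded with the groups can still bias the resulting estimate. Both refinements are omitted here:

```python
import random

def nearest_centroid_fit(X, y):
    """Fit: per-class feature means."""
    cents = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def nearest_centroid_predict(cents, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

def cross_val_accuracy(X, y, k=5, seed=0):
    """Plain k-fold cross-validation accuracy."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        fold_set = set(fold)
        train = [i for i in idx if i not in fold_set]
        cents = nearest_centroid_fit([X[i] for i in train],
                                     [y[i] for i in train])
        correct += sum(nearest_centroid_predict(cents, X[i]) == y[i]
                       for i in fold)
    return correct / len(X)

# Two synthetic groups separated by a large mean shift
X = [[0.1, 0.2], [0.0, -0.1], [0.2, 0.0], [-0.1, 0.1],
     [3.1, 3.0], [2.9, 3.2], [3.0, 2.8], [3.2, 3.1]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
acc = cross_val_accuracy(X, y, k=4)
```

If the mean shift came from a batch effect confounded with the group label rather than from biology, this accuracy estimate would be equally high yet would not generalize to independent data, which is exactly the bias the study quantifies.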

  17. Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Jian, Shaoyong

    2018-03-01

An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, dimensionality reduction is applied to the rock slice characteristic parameters by the principal component analysis method. Then, the mapping relationship between rock slice characteristic parameters and permeability is found through fuzzy neural networks. The estimation validity and reliability of this method were tested with practical data from the Yan’an region in the Ordos Basin. The results showed that the average relative error of permeability estimation for this method is 6.25%, and that this method had better convergence speed and higher accuracy than others. Therefore, by using cheap rock slice related information, the permeability of a rock reservoir can be estimated efficiently and accurately, with high reliability, practicability and application prospects.

  18. Validation of an administrative claims-based diagnostic code for pneumonia in a US-based commercially insured COPD population

    PubMed Central

    Kern, David M; Davis, Jill; Williams, Setareh A; Tunceli, Ozgur; Wu, Bingcao; Hollis, Sally; Strange, Charlie; Trudo, Frank

    2015-01-01

Objective To estimate the accuracy of claims-based pneumonia diagnoses in COPD patients using clinical information in medical records as the reference standard. Methods Selecting from a repository containing members’ data from 14 regional United States health plans, this validation study identified pneumonia diagnoses within a group of patients initiating treatment for COPD between March 1, 2009 and March 31, 2012. Patients with ≥1 claim for pneumonia (International Classification of Diseases Version 9-CM code 480.xx–486.xx) were identified during the 12 months following treatment initiation. A subset of 800 patients was randomly selected for abstraction of medical record data (paper based and electronic) for a target sample of 400 patients, to estimate validity within a 5% margin of error. Positive predictive value (PPV) was calculated for the claims diagnosis of pneumonia relative to the reference standard, defined as a documented diagnosis in the medical record. Results A total of 388 records were reviewed; 311 included a documented pneumonia diagnosis, indicating that 80.2% (95% confidence interval [CI]: 75.8% to 84.0%) of claims-identified pneumonia diagnoses were validated by the medical charts. Claims-based diagnoses in inpatient or emergency departments (n=185) had greater PPV than those in outpatient settings (n=203): 87.6% (95% CI: 81.9%–92.0%) versus 73.4% (95% CI: 66.8%–79.3%), respectively. Claims diagnoses verified with paper-based charts had similar PPV to the overall study sample, 80.2% (95% CI: 71.1%–87.5%), and higher PPV than those linked to electronic medical records, 73.3% (95% CI: 65.5%–80.2%). Combined paper-based and electronic records had a higher PPV, 87.6% (95% CI: 80.9%–92.6%). Conclusion Administrative claims data indicating a diagnosis of pneumonia in COPD patients are supported by medical records. The accuracy of a medical record diagnosis of pneumonia remains unknown. With increased use of claims data in medical research, COPD researchers
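The PPV computation in the Results is simple to reproduce. The sketch below uses the abstract's overall counts (311 confirmed of 388 reviewed) with a normal-approximation (Wald) interval; the paper's exact CI method is not stated, so these bounds differ slightly from the published 75.8%–84.0%:

```python
import math

def ppv_with_ci(true_positives, total_flagged, z=1.96):
    """Positive predictive value with a Wald (normal-approximation) 95% CI.
    Returns (ppv, lower, upper), clipped to [0, 1]."""
    p = true_positives / total_flagged
    half = z * math.sqrt(p * (1 - p) / total_flagged)
    return p, max(0.0, p - half), min(1.0, p + half)

# Figures from the abstract: 311 of 388 claims-identified diagnoses confirmed
ppv, lo, hi = ppv_with_ci(311, 388)
```

A Wilson score interval would track the published bounds more closely for proportions near the ends of the scale; the Wald form is used here only for brevity.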

  19. A fuel-based approach to estimating motor vehicle exhaust emissions

    NASA Astrophysics Data System (ADS)

    Singer, Brett Craig

    in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.

  20. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    PubMed

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance for the regression formula. For the use of this primary multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor "arithmetic mean" of luminance yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.

  1. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
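
    The distribution-transform idea can be sketched generically: simulate the measurement distribution over a grid of true values, tabulate the (biased) expected mean, and invert that mapping before applying the mean estimator. The folded-noise model below is a stand-in assumption for illustration only, not the paper's PS-OCT retardation noise model.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(true_ret, noise_sd, n):
    """Simulated measurement: additive noise folded into [0, 90] degrees.
    A stand-in noise model, not the actual PS-OCT one."""
    s = np.abs(true_ret + rng.normal(0, noise_sd, n))  # fold at 0
    return 90 - np.abs(90 - s)                         # fold at 90

# Monte-Carlo calibration: expected measured mean for each true retardation
grid = np.linspace(0, 90, 91)
expected_mean = np.array([measure(t, 15.0, 50000).mean() for t in grid])
expected_mean = np.maximum.accumulate(expected_mean)   # guard against MC jitter

def corrected_estimate(samples):
    """Mean estimator followed by inversion of the simulated bias curve."""
    return np.interp(samples.mean(), expected_mean, grid)

true = 5.0                        # near the boundary, where bias is largest
obs = measure(true, 15.0, 50000)
plain = obs.mean()                # plain mean: biased upward near 0 degrees
corrected = corrected_estimate(obs)
```

    Near the boundary of the measurement range the plain mean is strongly biased, while the transform-then-mean estimate recovers a value much closer to the truth, mirroring the paper's motivation.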

  2. Development and validation of two equations based on anthropometry, estimating body fat for the Greek adult population.

    PubMed

    Kanellakis, Spyridon; Skoufas, Efstathios; Khudokonenko, Vladlena; Apostolidou, Eftychia; Gerakiti, Loukia; Andrioti, Maria-Chrysi; Bountouvi, Evangelia; Manios, Yannis

    2017-02-01

    The aims were to validate anthropometric equations from the current literature predicting body fat percentage (%BF) in the Greek population, to develop and validate two anthropometric equations estimating %BF, and to compare them with the retrieved equations. Anthropometric data from 642 Greek adults were incorporated. Dual-energy X-ray absorptiometry was used as the reference method. The comparison with other equations was made using Bland-Altman analysis, the intraclass correlation coefficient, and Lin's concordance correlation coefficient. Nine of the thirty-one retrieved equations had no statistically significant bias. However, all of them had wide limits of agreement (±8.3 to ±16 %BF). The equations derived were: BF% = -0.615 - 10.948 × sex + 0.321 × waist circumference + 0.502 × hips circumference - 0.39 × forearm circumference - 19.768 × height (m) and BF% = -27.787 - 5.515 × sex - 8.419 × height + 0.145 × waist circumference + 0.270 × hips circumference + 7.509 × log of thigh skinfold + 20.090 × log of sum of skinfolds (bicep + tricep + suprailiac + subscapular) - 0.445 × forearm circumference. Bland-Altman reliability analysis showed non-significant biases of -0.058 and -0.148 %BF and limits of agreement ±8.100 and ±6.056 %BF; the intraclass correlation coefficients were 0.955 and 0.976; and Lin's concordance correlation coefficients were 0.914 and 0.951, respectively. Literature equations performed moderately on this study's population. Therefore, two equations were designed and validated. The first is simple and easily applicable, with measures obtained from a measuring tape; the second is more complicated yet more accurate and reliable. Both were found to be reliable for the assessment of body composition in the Greek population. © 2017 The Obesity Society.
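
    The two equations can be transcribed directly into code. Two coding details are our assumptions, since the excerpt does not state them: sex is coded 1 for men and 0 for women, and "log" is read as log10; outputs are therefore illustrative.

```python
import math

# Direct transcription of the two published equations. Assumed codings (not
# stated in the excerpt): sex = 1 for men, 0 for women; "log" = log10.
# Circumferences in cm, skinfolds in mm, height in metres.
def bf_simple(sex, waist_cm, hips_cm, forearm_cm, height_m):
    return (-0.615 - 10.948 * sex + 0.321 * waist_cm + 0.502 * hips_cm
            - 0.39 * forearm_cm - 19.768 * height_m)

def bf_skinfold(sex, height_m, waist_cm, hips_cm, forearm_cm,
                thigh_sf_mm, sum_sf_mm):
    return (-27.787 - 5.515 * sex - 8.419 * height_m + 0.145 * waist_cm
            + 0.270 * hips_cm + 7.509 * math.log10(thigh_sf_mm)
            + 20.090 * math.log10(sum_sf_mm) - 0.445 * forearm_cm)

bf1 = bf_simple(sex=1, waist_cm=85, hips_cm=98, forearm_cm=27, height_m=1.78)
bf2 = bf_skinfold(sex=0, height_m=1.65, waist_cm=78, hips_cm=100,
                  forearm_cm=24, thigh_sf_mm=25, sum_sf_mm=60)
```

    Both illustrative inputs yield physiologically plausible %BF values, consistent with the first equation needing only a measuring tape and the second also requiring skinfold calipers.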

  3. Deflection-Based Structural Loads Estimation From the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew M.; Lokos, William A.

    2005-01-01

    Traditional techniques in structural load measurement entail the correlation of a known load with strain-gage output from the individual components of a structure or machine. The use of strain gages has proved successful and is considered the standard approach for load measurement. However, remotely measuring aerodynamic loads using deflection measurement systems to determine aeroelastic deformation as a substitute for strain gages may yield lower testing costs while improving aircraft performance through reduced instrumentation weight. This technique was examined using a reliable strain and structural deformation measurement system. The objective of this study was to explore the utility of deflection-based load estimation, using the active aeroelastic wing F/A-18 aircraft. Calibration data from ground tests performed on the aircraft were used to derive left wing-root and wing-fold bending-moment and torque load equations based on strain gages; for this study, however, point deflections were used to derive deflection-based load equations. Comparisons between the strain-gage and deflection-based methods are presented. Flight data from the phase-1 active aeroelastic wing flight program were used to validate the deflection-based load estimation method. Flight validation revealed a strong bending-moment correlation and a slightly weaker torque correlation. Refinement of current techniques and future studies are discussed.
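
    The calibration step described above amounts to a least-squares fit of a linear "load equation" relating a measured load to sensor outputs (here, point deflections instead of strain-gage signals). The sketch below uses entirely made-up numbers to show the procedure under that assumption.

```python
import numpy as np

# Hypothetical calibration in the spirit of the study: derive a linear
# load equation relating wing-root bending moment to a few point
# deflections, as strain-gage load equations are derived from ground-test
# calibration data. All numbers are made up.
rng = np.random.default_rng(2)
n_cases, n_sensors = 40, 3
defl = rng.normal(0, 1.0, (n_cases, n_sensors))           # point deflections
true_coeff = np.array([120.0, -35.0, 60.0])               # assumed structure
moment = defl @ true_coeff + rng.normal(0, 5.0, n_cases)  # calibration loads

# Least-squares fit of the deflection-based load equation (with intercept)
X = np.column_stack([np.ones(n_cases), defl])
coef, *_ = np.linalg.lstsq(X, moment, rcond=None)

predicted = X @ coef
r = np.corrcoef(predicted, moment)[0, 1]                  # calibration fit
```

    A high calibration correlation here corresponds to the strong bending-moment correlation the flight validation reported.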

  4. Criterion validity of the visual estimation method for determining patients' meal intake in a community hospital.

    PubMed

    Kawasaki, Yui; Sakai, Masashi; Nishimura, Kazuhiro; Fujiwara, Keiko; Fujisaki, Kahori; Shimpo, Misa; Akamatsu, Rie

    2016-12-01

    The accuracy of the visual estimation method is unknown, even though it is commonly used in hospitals to measure the dietary intake of patients. We aimed to compare the validity of visual estimation according to the raters' job categories and tray divisions, and to demonstrate associations between meal characteristics and the validity of visual estimation in a usual clinical setting in a community hospital. We collected patients' dietary intake data in a usual clinical setting for each tray in 3 ways: visual estimation by nursing assistants, visual estimation by dietitians, and weighing by researchers (the reference method). Dietitians estimated dietary intake using 2 divisions: the whole tray and individual food items. We then compared the weighed and visually estimated data to evaluate the validity of the visual estimation method. Mean nutrient consumption of the target trays differed significantly between visual estimation and the weighed method (visual estimation by nursing assistants [589 ± 168 kcal, 24.3 ± 7.0 g/tray, p < 0.01], dietitians' whole-tray estimates [561 ± 171 kcal, 23.0 ± 6.9 g/tray, p < 0.05], and dietitians' food-item estimates for energy [562 ± 171 kcal/tray, p < 0.05], but not dietitians' food-item estimates for protein [23.4 ± 7.3 g/tray, p = 0.63]). Spearman's correlations between the methods were very high for energy (ρ = 0.91-0.98, p < 0.01) and protein (ρ = 0.88-0.96, p < 0.01) intakes. The limits of agreement in the Bland-Altman plots for both dietary intake categories were -121 to 147 kcal/tray and -6.4 to 7.0 g/tray (nursing assistants, whole-tray division), -122 to 106 kcal/tray and -6.7 to 5.5 g/tray (dietitians, whole-tray division), and -82 to 66 kcal/tray and -4.3 to 3.9 g/tray (dietitians, food-item division). A high intake rate of grains was significantly associated with decreased odds of a difference between the two methods in the nursing assistants' whole-tray evaluation.
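
    The Bland-Altman limits of agreement quoted above are the mean difference between the two methods plus or minus 1.96 standard deviations of the differences. A minimal sketch with hypothetical tray-level data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, e.g. visually estimated vs. weighed intake."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical tray-level energy intakes (kcal): visual vs. weighed
rng = np.random.default_rng(3)
weighed = rng.uniform(300, 900, 50)
visual = weighed + rng.normal(10, 60, 50)   # small bias plus scatter
bias, (lo, hi) = bland_altman(visual, weighed)
```

    The interval (lo, hi) plays the same role as the kcal/tray ranges reported in the abstract.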

  5. Validation of Ocean Color Remote Sensing Reflectance Using Autonomous Floats

    NASA Technical Reports Server (NTRS)

    Gerbi, Gregory P.; Boss, Emanuel; Werdell, P. Jeremy; Proctor, Christopher W.; Haentjens, Nils; Lewis, Marlon R.; Brown, Keith; Sorrentino, Diego; Zaneveld, J. Ronald V.; Barnard, Andrew H.; hide

    2016-01-01

    The use of autonomous profiling floats for observational estimates of radiometric quantities in the ocean is explored, and the use of this platform for validation of satellite-based estimates of remote sensing reflectance in the ocean is examined. This effort includes comparing quantities estimated from float and satellite data at nominal wavelengths of 412, 443, 488, and 555 nm, and examining sources and magnitudes of uncertainty in the float estimates. This study had 65 occurrences of coincident high-quality observations from floats and MODIS Aqua and 15 occurrences of coincident high-quality observations from floats and the Visible Infrared Imaging Radiometer Suite (VIIRS). The float estimates of remote sensing reflectance are similar to the satellite estimates, with disagreement of a few percent in most wavelengths. The variability of the float-satellite comparisons is similar to the variability of in situ-satellite comparisons using a validation dataset from the Marine Optical Buoy (MOBY). This, combined with the agreement of float-based and satellite-based quantities, suggests that floats are likely a good platform for validation of satellite-based estimates of remote sensing reflectance.

  6. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    PubMed

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method for calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented, based on the wavelet transform and filter banks. The method is implemented as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum distribution of the fractal component is estimated using an auto-regressive model, and the spectral exponent γ is estimated by the least-squares method; finally, the fractal dimension of the HRV signal is computed from the formula D = 2 - (γ - 1)/2. To validate the stability and reliability of the proposed method, fractional Brownian motion was used to simulate 24 fractal signals with a known fractal dimension of 1.6; the results show that the method is stable and reliable.
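
    The pipeline can be sketched end to end: synthesize a signal with a known spectral exponent, estimate γ from a log-log fit of the power spectrum, and apply D = 2 - (γ - 1)/2. A raw periodogram replaces the paper's auto-regressive spectral estimate here, so treat this as a simplified illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def synth_power_law(n, gamma):
    """Spectral synthesis of a signal whose power spectrum ~ 1/f^gamma."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-gamma / 2.0)
    phase = rng.uniform(0, 2 * np.pi, len(freqs))
    return np.fft.irfft(amp * np.exp(1j * phase), n)

def estimate_gamma(x):
    """Least-squares slope of log power vs. log frequency (the paper uses
    an AR spectral estimate; a periodogram suffices for this sketch)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0)[1:-1]     # drop DC and Nyquist
    power = np.abs(np.fft.rfft(x))[1:-1] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

gamma_true = 1.8                       # gives D = 2 - (1.8 - 1)/2 = 1.6
x = synth_power_law(4096, gamma_true)
gamma_hat = estimate_gamma(x)
D = 2 - (gamma_hat - 1) / 2            # the paper's relation
```

    With γ = 1.8 the recovered dimension is close to 1.6, the value the paper uses for its simulated validation signals.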

  7. Remote sensing-based estimation of annual soil respiration at two contrasting forest sites

    NASA Astrophysics Data System (ADS)

    Huang, Ni; Gu, Lianhong; Black, T. Andrew; Wang, Li; Niu, Zheng

    2015-11-01

    Soil respiration (Rs), an important component of the global carbon cycle, can be estimated using remotely sensed data, but the accuracy of this technique has not been thoroughly investigated. In this study, we proposed a methodology for the remote estimation of annual Rs at two contrasting FLUXNET forest sites (a deciduous broadleaf forest and an evergreen needleleaf forest). A version of Akaike's information criterion was used to select the best model from a range of models for annual Rs estimation based on remotely sensed data products from the Moderate Resolution Imaging Spectroradiometer and a root-zone soil moisture product derived from assimilation of the NASA Advanced Microwave Scanning Radiometer soil moisture products and a two-layer Palmer water balance model. We found that the Arrhenius-type function based on nighttime land surface temperature (LST-night) was the best model, comprehensively considering model explanatory power and model complexity, at the Missouri Ozark and BC-Campbell River 1949 Douglas-fir sites. In addition, a multicollinearity problem among LST-night, root-zone soil moisture, and the plant photosynthesis factor was effectively avoided by selecting the LST-night-driven model. Cross validation showed that temporal variation in Rs was captured by the LST-night-driven model with a mean absolute error below 1 µmol CO2 m-2 s-1 at both forest sites. An obvious overestimation in 2005 and 2007 at the Missouri Ozark site, caused by summer drought, reduced the cross-validation accuracy. However, no significant difference was found between the Arrhenius-type function driven by LST-night alone and the function considering both LST-night and root-zone soil moisture. This finding indicated that the contribution of soil moisture to Rs was relatively small in our multiyear data set. To predict intersite Rs, maximum leaf area index (LAImax) was used as an upscaling factor to calibrate the site-specific reference respiration
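
    An Arrhenius-type temperature response can be fit by linearising it: ln Rs is linear in 1/T. The exact parameterisation the paper uses is not given in this excerpt, so the generic form Rs = A·exp(-Ea/(R·T)) and all numbers below are illustrative assumptions.

```python
import numpy as np

# Generic Arrhenius-type respiration model Rs = A * exp(-Ea / (R * T)),
# driven by nighttime LST. Parameters and data are illustrative only;
# the paper's exact parameterisation is not given in the excerpt.
R_GAS = 8.314  # J mol-1 K-1

def arrhenius(T_kelvin, A, Ea):
    return A * np.exp(-Ea / (R_GAS * T_kelvin))

rng = np.random.default_rng(5)
T = rng.uniform(275, 305, 200)                                 # LST-night, K
Rs_obs = arrhenius(T, A=2.0e9, Ea=50_000.0) * np.exp(rng.normal(0, 0.1, 200))

# Linearise: ln Rs = ln A - (Ea/R) * (1/T), then ordinary least squares
slope, intercept = np.polyfit(1.0 / T, np.log(Rs_obs), 1)
Ea_hat = -slope * R_GAS
A_hat = np.exp(intercept)

mae = np.mean(np.abs(arrhenius(T, A_hat, Ea_hat) - Rs_obs))    # µmol m-2 s-1
```

    With moderate multiplicative noise, the recovered fit tracks the synthetic observations with a mean absolute error well below 1 µmol CO2 m-2 s-1, the accuracy level the study reports.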

  8. On the validity of time-dependent AUC estimators.

    PubMed

    Schmid, Matthias; Kestler, Hans A; Potapov, Sergej

    2015-01-01

    Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-07-01

    This work analyses the overall stress/strain characteristic of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology composed of a mechanical model of multi-thin film under bending loads and related stress simulations based on nonlinear finite element analysis (FEA) is proposed, and validated to be more reliable compared with related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plate, which is regarded as a key design parameter to minimize stress impact for the concerned OLED devices, is acquired using the present methodology. The results point out that both the thickness and mechanical properties of the cover plate help in determining the NA location. In addition, several concave and convex radii are applied to examine the reliable mechanical tolerance and to provide an insight into the estimated reliability of foldable OLED encapsulations.

  10. A novel body circumferences-based estimation of percentage body fat.

    PubMed

    Lahav, Yair; Epstein, Yoram; Kedem, Ron; Schermann, Haggai

    2018-03-01

    Anthropometric measures of body composition are often used for rapid and cost-effective estimation of percentage body fat (%BF) in field research, serial measurements and screening. Our aim was to develop a validated estimate of %BF for the general population, based on simple body circumference measures. The study cohort consisted of two consecutive samples of health club members, designated as 'development' (n = 476, 61 % men, 39 % women) and 'validation' (n = 224, 50 % men, 50 % women) groups. All subjects underwent anthropometric measurements as part of their registration to a health club. A dual-energy X-ray absorptiometry (DEXA) scan was used as the 'gold standard' estimate of %BF. Linear regressions were used to construct the predictive equation (%BFcal). Bland-Altman statistics, Lin concordance coefficients and the percentage of subjects falling within 5 % of the %BF estimate by DEXA were used to evaluate accuracy and precision of the equation. The variance inflation factor was used to check multicollinearity. Two distinct equations were developed for men and women: %BFcal (men) = 10.1 - 0.239H + 0.8A - 0.5N; %BFcal (women) = 19.2 - 0.239H + 0.8A - 0.5N (H, height; A, abdomen; N, neck, all in cm). Bland-Altman differences were randomly distributed and showed no fixed bias. Lin concordance coefficients of %BFcal were 0.89 in men and 0.86 in women. About 79.5 % of %BF predictions in both sexes were within ±5 % of the DEXA value. The Durnin-Womersley skinfolds equation was less accurate than %BFcal for prediction of %BF in our study group. We conclude that %BFcal offers the advantage of obtaining a reliable estimate of %BF from simple measurements that require no sophisticated tools and only minimal prior training and experience.
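
    The published equations differ only in their intercept, so they transcribe into a single function. The example inputs are hypothetical.

```python
def bf_cal(sex, height_cm, abdomen_cm, neck_cm):
    """%BFcal from the published equations; only the intercept differs
    between the sexes (H = height, A = abdomen, N = neck, all in cm)."""
    intercept = 10.1 if sex == "men" else 19.2
    return intercept - 0.239 * height_cm + 0.8 * abdomen_cm - 0.5 * neck_cm

# Hypothetical example measurements
bf_m = bf_cal("men", height_cm=178, abdomen_cm=88, neck_cm=38)
bf_w = bf_cal("women", height_cm=165, abdomen_cm=80, neck_cm=33)
```

    For these inputs the equations give roughly 19 %BF for the man and 27 %BF for the woman, illustrating that the formula needs nothing beyond a measuring tape.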

  11. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    PubMed

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of simulations to changes in magnitude of the principal moments of inertia within ±10% and to changes in orientation of the principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in magnitude of the principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
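
    The evaluation metric above, the (root mean squared) deviation angle between experimental and simulated angular velocity vectors, is straightforward to compute. The trajectories below are toy data, not the study's cylinder measurements.

```python
import numpy as np

def deviation_angle(w_exp, w_sim):
    """Angle (deg) between experimental and simulated angular velocity vectors."""
    w_exp, w_sim = np.asarray(w_exp, float), np.asarray(w_sim, float)
    c = np.dot(w_exp, w_sim) / (np.linalg.norm(w_exp) * np.linalg.norm(w_sim))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def rms_deviation(exp_series, sim_series):
    """Root-mean-squared deviation angle over a trajectory of samples."""
    angs = [deviation_angle(a, b) for a, b in zip(exp_series, sim_series)]
    return float(np.sqrt(np.mean(np.square(angs))))

# Toy trajectories (hypothetical): simulation mildly perturbed from experiment
rng = np.random.default_rng(6)
exp_w = rng.normal(0, 1, (100, 3))
sim_w = exp_w + rng.normal(0, 0.05, (100, 3))
rmsd = rms_deviation(exp_w, sim_w)
```

    A small perturbation yields RMS deviation angles of a few degrees, the same scale as the study's geometric-tensor results.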

  12. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
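
    The baseline Hargreaves-Samani (HS) model that the new model builds on estimates global solar radiation from the diurnal temperature range. This sketch shows the classic HS form only; the paper's modified model adds further terms not reproduced here, and the coefficient values are the commonly cited defaults, not the paper's calibrated ones.

```python
import math

def hargreaves_samani(Ra, tmax, tmin, k_rs=0.16):
    """Classic Hargreaves-Samani estimate of global solar radiation:
    Rs = k_rs * Ra * sqrt(Tmax - Tmin), with Ra the extraterrestrial
    radiation (MJ m-2 day-1) and k_rs ~0.16 inland, ~0.19 coastal."""
    return k_rs * Ra * math.sqrt(tmax - tmin)

# Illustrative monthly-average inputs
rs = hargreaves_samani(Ra=30.0, tmax=28.0, tmin=16.0)   # MJ m-2 day-1
```

    A 12 °C diurnal range with Ra = 30 MJ m-2 day-1 gives roughly 16.6 MJ m-2 day-1, a plausible mid-latitude summer value.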

  13. Postnatal gestational age estimation using newborn screening blood spots: a proposed validation protocol

    PubMed Central

    Murphy, Malia S Q; Hawken, Steven; Atkinson, Katherine M; Milburn, Jennifer; Pervin, Jesmin; Gravett, Courtney; Stringer, Jeffrey S A; Rahman, Anisur; Lackritz, Eve; Chakraborty, Pranesh; Wilson, Kumanan

    2017-01-01

    Background Knowledge of gestational age (GA) is critical for guiding neonatal care and quantifying regional burdens of preterm birth. In settings where access to ultrasound dating is limited, postnatal estimates are frequently used despite the issues of accuracy associated with postnatal approaches. Newborn metabolic profiles are known to vary by severity of preterm birth. Recent work by our group and others has highlighted the accuracy of postnatal GA estimation algorithms derived from routinely collected newborn screening profiles. This protocol outlines the validation of a GA model originally developed in a North American cohort among international newborn cohorts. Methods Our primary objective is to use blood spot samples collected from infants born in Zambia and Bangladesh to evaluate our algorithm’s capacity to correctly classify GA within 1, 2, 3 and 4 weeks. Secondary objectives are to 1) determine the algorithm's accuracy in small-for-gestational-age and large-for-gestational-age infants, 2) determine its ability to correctly discriminate GA of newborns across dichotomous thresholds of preterm birth (≤34 weeks, <37 weeks GA) and 3) compare the relative performance of algorithms derived from newborn screening panels including all available analytes and those restricted to analyte subsets. The study population will consist of infants born to mothers already enrolled in one of two preterm birth cohorts in Lusaka, Zambia, and Matlab, Bangladesh. Dried blood spot samples will be collected and sent for analysis in Ontario, Canada, for model validation. Discussion This study will determine the validity of a GA estimation algorithm across ethnically diverse infant populations and assess population specific variations in newborn metabolic profiles. PMID:29104765

  14. A Web Application for Validating and Disseminating Surface Energy Balance Evapotranspiration Estimates for Hydrologic Modeling Applications

    NASA Astrophysics Data System (ADS)

    Schneider, C. A.; Aggett, G. R.; Nevo, A.; Babel, N. C.; Hattendorf, M. J.

    2008-12-01

    The western United States faces an increasing threat from drought, and the social, economic, and environmental impacts that come with it. The combination of diminished water supplies and increasing demand for urban and other uses is rapidly depleting surface and ground water reserves traditionally allocated for agricultural use. Quantification of consumptive water use is increasingly important as water resources come under growing pressure from more users and competing interests. Scarce water supplies can be managed more efficiently through use of information and prediction tools accessible via the internet. METRIC (Mapping ET at high Resolution with Internalized Calibration) represents a maturing technology for deriving a remote sensing-based surface energy balance for estimating ET from the earth's surface. This technology has the potential to become widely adopted by water resources communities, providing critical support to a host of water decision support tools. ET images created using METRIC or similar remote sensing-based processing systems could be routinely used as input to operational and planning models for water demand forecasting, reservoir operations, ground-water management, irrigation water supply planning, water rights regulation, and for the improvement, validation, and use of hydrological models. The ET modeling, and the subsequent validation and distribution of results via the web presented here, provides a vehicle through which METRIC ET parameters can be made more accessible to hydrologic modelers. It will enable users of the data to assess the results of the spatially distributed ET modeling and compare them with results from conventional ET estimation methods prior to assimilation in surface and ground water models. In addition, this ET-Server application will provide rapid and transparent access to the data, enabling quantification of uncertainties due to errors in temporal sampling and METRIC modeling, while the GIS-based analytical

  15. Situating Standard Setting within Argument-Based Validity

    ERIC Educational Resources Information Center

    Papageorgiou, Spiros; Tannenbaum, Richard J.

    2016-01-01

    Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this lack in the literature, with a specific focus on topics related to argument-based…

  16. Validation of TRMM precipitation radar monthly rainfall estimates over Brazil

    NASA Astrophysics Data System (ADS)

    Franchito, Sergio H.; Rao, V. Brahmananda; Vasques, Ana C.; Santo, Clovis M. E.; Conforte, Jorge C.

    2009-01-01

    In an attempt to validate the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) over Brazil, TRMM PR estimates are compared with rain gauge station data from Agência Nacional de Energia Elétrica (ANEEL). The analysis is conducted on a seasonal basis and considers five geographic regions with different precipitation regimes. The results showed that TRMM PR seasonal rainfall is well correlated with ANEEL rainfall (correlation coefficients are significant at the 99% confidence level) over most of Brazil. The random and systematic errors of TRMM PR are sensitive to seasonal and regional differences. During December to February and March to May, TRMM PR rainfall is reliable over Brazil. In June to August (September to November) TRMM PR estimates are only reliable in the Amazonian and southern (Amazonian and southeastern) regions. In the other regions the relative RMS errors are larger than 50%, indicating that the random errors are high.

  17. Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology

    NASA Astrophysics Data System (ADS)

    García-Barberena, Javier; Ubani, Nora

    2016-05-01

    This work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar power (CSP) plants based on parabolic trough (PT) technology. The validation was carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key model parameters were fixed and a whole year was simulated. The results obtained for the complete year showed very good agreement for total gross and net electric production: the estimations for these magnitudes show a 1.47% and 2.02% bias, respectively. The results show that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.

  18. Precipitation Estimate Using NEXRAD Ground-Based Radar Images: Validation, Calibration and Spatial Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong

    2012-12-17

    Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States with a high spatial resolution of approximately 4 × 4 km². Two major issues concerning the application of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, geographic information system (GIS) based software that can automatically support processing of NEXRAD data for hydrologic and ecological models is presented. Some geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in a mountainous region versus the southern plains and in the cold season versus the warm season, as well as the effect of sub-grid variability and temporal scale on NEXRAD performance. In this case study, the performance of MPE was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment of MPE indicates the importance of removing bias from the MPE precipitation product before its application, especially in complex mountainous regions. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison results show that no one method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.
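
    One simple, widely used radar calibration (among the geostatistical alternatives the chapter compares) is a multiplicative mean-field bias factor: the ratio of total gauge rainfall to total collocated radar rainfall. The data below are hypothetical accumulations.

```python
import numpy as np

def mean_field_bias(gauge, radar, min_radar=0.1):
    """Multiplicative mean-field bias factor: total gauge rainfall over
    total collocated radar rainfall. One simple calibration option; the
    chapter compares several geostatistical alternatives."""
    gauge, radar = np.asarray(gauge, float), np.asarray(radar, float)
    keep = radar > min_radar          # skip near-zero radar pixels
    return gauge[keep].sum() / radar[keep].sum()

# Hypothetical collocated gauge/radar accumulations (mm)
gauge = np.array([12.0, 5.5, 0.0, 20.1, 8.4])
radar = np.array([10.0, 4.0, 0.05, 16.0, 7.0])
factor = mean_field_bias(gauge, radar)
corrected = radar * factor            # bias-adjusted radar field
```

    Here the radar underestimates the gauges, so the factor is greater than 1 and scales the whole field up; more sophisticated geostatistical methods adjust the bias locally instead.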

  19. Determination of the criterion-related validity of hip joint angle test for estimating hamstring flexibility using a contemporary statistical approach.

    PubMed

    Sainz de Baranda, Pilar; Rodríguez-Iniesta, María; Ayala, Francisco; Santonja, Fernando; Cejudo, Antonio

    2014-07-01

    To examine the criterion-related validity of the horizontal hip joint angle (H-HJA) test and vertical hip joint angle (V-HJA) test for estimating hamstring flexibility measured through the passive straight-leg raise (PSLR) test using contemporary statistical measures. Validity study. Controlled laboratory environment. One hundred thirty-eight professional trampoline gymnasts (61 women and 77 men). Hamstring flexibility. Each participant performed 2 trials of H-HJA, V-HJA, and PSLR tests in a randomized order. The criterion-related validity of H-HJA and V-HJA tests was measured through the estimation equation, typical error of the estimate (TEEST), validity correlation (β), and their respective confidence limits. The findings from this study suggest that although H-HJA and V-HJA tests showed moderate to high validity scores for estimating hamstring flexibility (standardized TEEST = 0.63; β = 0.80), the TEEST statistic reported for both tests was not narrow enough for clinical purposes (H-HJA = 10.3 degrees; V-HJA = 9.5 degrees). Subsequently, the predicted likely thresholds for the true values that were generated were too wide (H-HJA = predicted value ± 13.2 degrees; V-HJA = predicted value ± 12.2 degrees). The results suggest that although the HJA test showed moderate to high validity scores for estimating hamstring flexibility, the prediction intervals between the HJA and PSLR tests are not strong enough to suggest that clinicians and sport medicine practitioners should use the HJA and PSLR tests interchangeably as gold standard measurement tools to evaluate and detect short hamstring muscle flexibility.
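
    The typical error of the estimate (TEEST) used above is the residual standard deviation from regressing the criterion measure (PSLR) on the practical test (HJA). The sketch below uses hypothetical angle data of roughly the study's sample size.

```python
import numpy as np

def typical_error_of_estimate(criterion, predictor):
    """TEEST (residual SD from regressing the criterion on the practical
    test) and the validity correlation between the two measures."""
    slope, intercept = np.polyfit(predictor, criterion, 1)
    resid = criterion - (slope * predictor + intercept)
    tee = resid.std(ddof=2)           # standard error of the estimate
    r = np.corrcoef(criterion, predictor)[0, 1]
    return tee, r

# Hypothetical angles (deg): PSLR criterion vs. HJA practical test
rng = np.random.default_rng(7)
hja = rng.uniform(60, 120, 138)
pslr = 0.9 * hja + 8 + rng.normal(0, 10, 138)
tee, r = typical_error_of_estimate(pslr, hja)
```

    As in the study, a moderately high validity correlation can coexist with a TEEST of around 10 degrees, which is too wide for clinical interchangeability.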

  20. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
As a particular special

  1. Development and validation of a food-based diet quality index for New Zealand adolescents

    PubMed Central

    2013-01-01

    Background As there is no population-specific, simple food-based diet index suitable for examination of diet quality in New Zealand (NZ) adolescents, there is a need to develop such a tool. Therefore, this study aimed to develop an adolescent-specific diet quality index based on dietary information sourced from a Food Questionnaire (FQ) and examine its validity relative to a four-day estimated food record (4DFR) obtained from a group of adolescents aged 14 to 18 years. Methods A diet quality index for NZ adolescents (NZDQI-A) was developed based on ‘Adequacy’ and ‘Variety’ of five food groups reflecting the New Zealand Food and Nutrition Guidelines for Healthy Adolescents. The NZDQI-A was scored from zero to 100, with a higher score reflecting a better diet quality. Forty-one adolescents (16 males, 25 females, aged 14–18 years) each completed the FQ and a 4DFR. The test-retest reliability of the FQ-derived NZDQI-A scores over a two-week period and the relative validity of the scores compared to the 4DFR were estimated using Pearson’s correlations. Construct validity was examined by comparing NZDQI-A scores against nutrient intakes obtained from the 4DFR. Results The NZDQI-A derived from the FQ showed good reliability (r = 0.65) and reasonable agreement with the 4DFR in ranking participants by scores (r = 0.39). More than half of the participants were classified into the same thirds of scores, while 10% were misclassified into the opposite thirds by the two methods. Higher NZDQI-A scores were also associated with lower total fat and saturated fat intakes and higher iron intakes. Conclusions Higher NZDQI-A scores were associated with more desirable fat and iron intakes. The scores derived from either the FQ or the 4DFR were comparable and reproducible when repeated within two weeks. The NZDQI-A is relatively valid and reliable for ranking diet quality in adolescents at a group level, even with a small sample size. Further studies are required to test the

  2. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
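    Both estimators compared above have simple closed forms. A minimal sketch of the standard formulas (the Chapman-modified Lincoln-Petersen estimator and the Seber-Le Cren two-pass removal estimator; the catch counts are invented, not the study's):

```python
def lincoln_petersen(marked, caught, recaptured):
    """Chapman-modified Lincoln-Petersen mark-recapture abundance estimate."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Seber-Le Cren two-pass removal estimate; assumes equal sampling
    efficiency on both passes, so it requires a declining catch (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("removal assumption violated: catch did not decline")
    return c1 * c1 / (c1 - c2)

n_mr = lincoln_petersen(marked=50, caught=60, recaptured=20)  # ~147 fish
n_rm = two_pass_removal(c1=60, c2=24)                         # 100 fish
```

    The removal estimator's sensitivity is visible here: if efficiency falls on later passes, the catch declines faster than the equal-efficiency model expects, so abundance is underestimated and efficiency overestimated, as the study found.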

  3. Estimating Evapotranspiration Using an Observation Based Terrestrial Water Budget

    NASA Technical Reports Server (NTRS)

    Rodell, Matthew; McWilliams, Eric B.; Famiglietti, James S.; Beaudoing, Hiroko K.; Nigro, Joseph

    2011-01-01

    Evapotranspiration (ET) is difficult to measure at the scales of climate models and climate variability. While satellite retrieval algorithms do exist, their accuracy is limited by the sparseness of in situ observations available for calibration and validation, which themselves may be unrepresentative of 500m and larger scale satellite footprints and grid pixels. Here, we use a combination of satellite and ground-based observations to close the water budgets of seven continental scale river basins (Mackenzie, Fraser, Nelson, Mississippi, Tocantins, Danube, and Ubangi), estimating mean ET as a residual. For any river basin, ET must equal total precipitation minus net runoff minus the change in total terrestrial water storage (TWS), in order for mass to be conserved. We make use of precipitation from two global observation-based products, archived runoff data, and TWS changes from the Gravity Recovery and Climate Experiment satellite mission. We demonstrate that while uncertainty in the water budget-based estimates of monthly ET is often too large for those estimates to be useful, the uncertainty in the mean annual cycle is small enough that it is practical for evaluating other ET products. Here, we evaluate five land surface model simulations, two operational atmospheric analyses, and a recent global reanalysis product based on our results. An important outcome is that the water budget-based ET time series in two tropical river basins, one in Brazil and the other in central Africa, exhibit a weak annual cycle, which may help to resolve debate about the strength of the annual cycle of ET in such regions and how ET is constrained throughout the year. The methods described will be useful for water and energy budget studies, weather and climate model assessments, and satellite-based ET retrieval optimization.
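    The budget closure itself is a one-line residual. A sketch with hypothetical monthly basin-mean values in mm (not figures from the study):

```python
def et_residual(precip_mm, runoff_mm, delta_tws_mm):
    """Basin-mean evapotranspiration as the terrestrial water-budget
    residual: ET = P - Q - dTWS, so that basin mass is conserved."""
    return precip_mm - runoff_mm - delta_tws_mm

# e.g. precipitation from observation-based products, runoff from archives,
# and the TWS change from GRACE, all expressed as basin-mean depths
et = et_residual(precip_mm=90.0, runoff_mm=25.0, delta_tws_mm=10.0)  # 55.0 mm
```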

  4. Development and validation of a melanoma risk score based on pooled data from 16 case-control studies

    PubMed Central

    Davies, John R; Chang, Yu-mei; Bishop, D Timothy; Armstrong, Bruce K; Bataille, Veronique; Bergman, Wilma; Berwick, Marianne; Bracci, Paige M; Elwood, J Mark; Ernstoff, Marc S; Green, Adele; Gruis, Nelleke A; Holly, Elizabeth A; Ingvar, Christian; Kanetsky, Peter A; Karagas, Margaret R; Lee, Tim K; Le Marchand, Loïc; Mackie, Rona M; Olsson, Håkan; Østerlind, Anne; Rebbeck, Timothy R; Reich, Kristian; Sasieni, Peter; Siskind, Victor; Swerdlow, Anthony J; Titus, Linda; Zens, Michael S; Ziegler, Andreas; Gallagher, Richard P.; Barrett, Jennifer H; Newton-Bishop, Julia

    2015-01-01

    Background We report the development of a cutaneous melanoma risk algorithm based upon 7 factors: hair colour, skin type, family history, freckling, nevus count, number of large nevi, and history of sunburn, intended to form the basis of a self-assessment webtool for the general public. Methods Predicted odds of melanoma were estimated by analysing a pooled dataset from 16 case-control studies using logistic random coefficients models. Risk categories were defined based on the distribution of the predicted odds in the controls from these studies. Imputation was used to estimate missing data in the pooled datasets. The 30th, 60th and 90th centiles were used to distribute individuals into four risk groups for their age, sex and geographic location. Cross-validation was used to test the robustness of the thresholds for each group by leaving out each study one by one. Performance of the model was assessed in an independent UK case-control study dataset. Results Cross-validation confirmed the robustness of the threshold estimates. Cases and controls were well discriminated in the independent dataset (area under the curve 0.75, 95% CI 0.73-0.78). 29% of cases were in the highest risk group compared with 7% of controls, and 43% of controls were in the lowest risk group compared with 13% of cases. Conclusion We have identified a composite score representing an estimate of relative risk and successfully validated this score in an independent dataset. Impact This score may be a useful tool to inform members of the public about their melanoma risk. PMID:25713022
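    The centile-based grouping described above can be sketched as follows (a simplified illustration that ignores the age/sex/location stratification and the imputation step):

```python
import bisect

def risk_group(predicted_odds, control_odds):
    """Assign one of four risk groups using the 30th, 60th, and 90th
    centiles of the predicted odds among controls (0 = lowest risk)."""
    ranked = sorted(control_odds)
    cuts = [ranked[int(p / 100 * (len(ranked) - 1))] for p in (30, 60, 90)]
    return bisect.bisect_right(cuts, predicted_odds)
```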

  5. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.
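    The two validation metrics reported above come straight from the confusion counts of the expert record review. A minimal sketch (the counts are illustrative, not the study's):

```python
def ppv_sensitivity(true_pos, false_pos, false_neg):
    """Positive predictive value and sensitivity of a case-finding
    algorithm, from expert-adjudicated validation counts."""
    ppv = true_pos / (true_pos + false_pos)
    sensitivity = true_pos / (true_pos + false_neg)
    return ppv, sensitivity

ppv, sens = ppv_sensitivity(true_pos=42, false_pos=58, false_neg=33)
```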

  6. Validation of Walk Score® for Estimating Neighborhood Walkability: An Analysis of Four US Metropolitan Areas

    PubMed Central

    Duncan, Dustin T.; Aldstadt, Jared; Whalen, John; Melly, Steven J.; Gortmaker, Steven L.

    2011-01-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability with addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5–11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participant’s residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators such as density of retail destinations and intersection density (p < 0.05). The magnitude varied by the GIS indicator of neighborhood walkability. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales. PMID:22163200
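    The central statistic here is Spearman's rank correlation between Walk Score and each GIS indicator. A sketch of the no-ties case (the example values are invented):

```python
def spearman(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1));
    valid when there are no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# e.g. Walk Scores vs. intersection density for five hypothetical addresses
rho = spearman([34, 62, 55, 81, 90], [12, 30, 25, 41, 38])  # 0.9
```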

  7. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    PubMed

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

    Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy decreased somewhat one day after model construction but remained relatively stable from one day to six months afterward. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
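    The regression step can be sketched with a single predictor; the actual model used 14 ECG/PPG-derived features with genetic-algorithm selection, and the PTT/BP pairs below are invented for illustration:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x (one-feature case)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# hypothetical beat-to-beat pulse transit times (ms) and systolic BP (mmHg)
ptt = [210.0, 220.0, 230.0, 250.0, 260.0]
sbp = [128.0, 124.0, 121.0, 114.0, 111.0]
a, b = fit_linear(ptt, sbp)  # b < 0: longer transit time, lower pressure
```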

  8. Gait Phase Estimation Based on Noncontact Capacitive Sensing and Adaptive Oscillators.

    PubMed

    Zheng, Enhao; Manca, Silvia; Yan, Tingfang; Parri, Andrea; Vitiello, Nicola; Wang, Qining

    2017-10-01

    This paper presents a novel strategy for acquiring an accurate and walking-speed-adaptive estimation of the gait phase through noncontact capacitive sensing and adaptive oscillators (AOs). The capacitive sensing system is designed with two sensing cuffs that measure leg muscle shape changes during walking. The system can be worn over clothing, freeing the skin from contact with electrodes. To track the capacitance signals, the gait phase estimator is built on an AO dynamic system, owing to its ability to synchronize with quasi-periodic signals. After implementing the whole system, we first evaluated offline estimation performance in experiments with 12 healthy subjects walking on a treadmill at changing speeds. The strategy achieved accurate and consistent gait phase estimation with only one channel of capacitance signal. The average root-mean-square errors in one stride were 0.19 rad (3.0% of one gait cycle) for constant walking speeds and 0.31 rad (4.9% of one gait cycle) for speed transitions, even after the subjects re-wore the sensing cuffs. We then validated our strategy in a real-time gait phase estimation task with three subjects walking at changing speeds. Our study indicates that the strategy based on capacitive sensing and AOs is a promising alternative for the control of exoskeletons/orthoses.
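    The AO tracking idea can be sketched with a single-frequency adaptive phase oscillator; the update form, the gains, and the input signal below are illustrative assumptions, not the authors' implementation:

```python
import math

def track_phase(signal, dt, nu_phi=8.0, nu_omega=4.0, omega0=2 * math.pi):
    """Minimal adaptive oscillator: phase phi and frequency omega are
    driven by the tracking error so they lock onto a quasi-periodic input."""
    phi, omega, alpha = 0.0, omega0, 0.0
    phases = []
    for y in signal:
        err = y - alpha * math.cos(phi)            # tracking error
        phi += dt * (omega - nu_phi * err * math.sin(phi))
        omega += dt * (-nu_omega * err * math.sin(phi))
        alpha += dt * (2.0 * err * math.cos(phi))  # amplitude adaptation
        phases.append(phi % (2 * math.pi))
    return phases

# a 1 Hz capacitance-like input sampled at 100 Hz
phases = track_phase([math.cos(2 * math.pi * i / 100) for i in range(300)], dt=0.01)
```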

  9. Mechanical energy estimation during walking: validity and sensitivity in typical gait and in children with cerebral palsy.

    PubMed

    Van de Walle, P; Hallemans, A; Schwartz, M; Truijen, S; Gosselink, R; Desloovere, K

    2012-02-01

    Gait efficiency in children with cerebral palsy is usually quantified by metabolic energy expenditure. Mechanical energy estimations, however, can be a valuable supplement, as they can be assessed during gait analysis and plotted over the gait cycle, thus revealing information on the timing and sources of increases in energy expenditure. Unfortunately, little information on validity and sensitivity exists. Three mechanical estimation approaches: (1) the centre of mass (CoM) approach, (2) the sum of segmental energies (SSE) approach, and (3) the integrated joint power approach, were validated against oxygen consumption and each other. Sensitivity was assessed in typical gait and in children with diplegia. The CoM approach underestimated total energy expenditure and showed poor sensitivity. The SSE approach overestimated energy expenditure and showed acceptable sensitivity. Validity and sensitivity were best for the integrated joint power approach. This method is therefore preferred for mechanical energy estimation in children with diplegia. However, mechanical energy should supplement, not replace, metabolic energy, as the total energy expended is not captured by any mechanical approach. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.

  11. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
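    The per-pixel classification step reduces to a posterior comparison between the valid-region and invalid-region models. A 1-D illustrative stand-in (a single Gaussian per class with invented parameters, rather than the FJ-GMMs over Gabor features):

```python
import math

def gauss(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def valid_posterior(feature, valid=(0.2, 0.1, 0.6), invalid=(0.8, 0.15, 0.4)):
    """P(valid | feature) from two weighted Gaussians (mu, sigma, weight);
    a pixel joins the usable iris mask when this posterior exceeds 0.5."""
    mu_v, s_v, w_v = valid
    mu_i, s_i, w_i = invalid
    pv = w_v * gauss(feature, mu_v, s_v)
    pi = w_i * gauss(feature, mu_i, s_i)
    return pv / (pv + pi)
```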

  12. Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.

    PubMed

    Kiadaliri, Aliasghar A; Englund, Martin

    2016-10-04

    The use of mapping algorithms has been suggested as a solution for predicting health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, and hence assessing the accuracy and validity of an algorithm before using it in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one was developed using a mixture model. Data from 1078 subjects (mean (SD) age 69.4 (7.2) years) with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms (P < 0.001). All predictions but the CEC predictions of the mixture model had a narrower range than the observed scores (22 to 90 %). All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample. Assessing the impact of the choice of
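    The three performance measures used above are standard. A minimal sketch (the observed/predicted utility values are invented):

```python
import math

def error_metrics(observed, predicted):
    """Mean error (signed bias), mean absolute error, and root mean
    squared error of mapped utility predictions."""
    errs = [p - o for o, p in zip(observed, predicted)]
    n = len(errs)
    me = sum(errs) / n
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    return me, mae, rmse

me, mae, rmse = error_metrics([0.5, 0.7, 0.9], [0.6, 0.7, 0.8])
```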

  13. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Specifically, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.

  14. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  15. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  16. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
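    In its simplest form, the analogy/clustering idea reduces to k-nearest-neighbor prediction over past projects. A sketch (the feature vectors and efforts are invented; real models also need feature normalization and weighting):

```python
def knn_effort(project, history, k=3):
    """Analogy-based effort estimate: mean effort of the k past projects
    closest to `project` in (already normalized) feature space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(project, rec[0]))[:k]
    return sum(effort for _, effort in nearest) / k

# (size_kloc, team_experience) -> person-months, all values hypothetical
history = [((10.0, 3.0), 40.0), ((12.0, 2.0), 55.0),
           ((11.0, 3.0), 48.0), ((80.0, 5.0), 400.0)]
estimate = knn_effort((11.0, 2.5), history, k=3)
```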

  17. Rediscovery rate estimation for assessing the validation of significant findings in high-throughput studies.

    PubMed

    Ganna, Andrea; Lee, Donghwan; Ingelsson, Erik; Pawitan, Yudi

    2015-07-01

    It is common and advised practice in biomedical research to validate experimental or observational findings in a population different from the one where the findings were initially assessed. This practice increases the generalizability of the results and decreases the likelihood of reporting false-positive findings. Validation becomes critical when dealing with high-throughput experiments, where the large number of tests increases the chance of observing false-positive results. In this article, we review common approaches to determine statistical thresholds for validation and describe the factors influencing the proportion of significant findings from a 'training' sample that are replicated in a 'validation' sample. We refer to this proportion as the rediscovery rate (RDR). In high-throughput studies, the RDR is a function of false-positive rate and power in both the training and validation samples. We illustrate the application of the RDR using simulated data and real data examples from metabolomics experiments. We further describe an online tool to calculate the RDR using t-statistics. We foresee two main applications. First, if the validation study has not yet been collected, the RDR can be used to decide the optimal combination between the proportion of findings taken to validation and the size of the validation study. Secondly, if a validation study has already been done, the RDR estimated using the training data can be compared with the observed RDR from the validation data; hence, the success of the validation study can be assessed. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
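    The dependence of the RDR on error rates and power can be sketched under a simple two-group model (proportion of true nulls pi0); this closed form is an illustrative simplification, not the article's exact framework:

```python
def expected_rdr(pi0, alpha_train, power_train, alpha_valid, power_valid):
    """Expected rediscovery rate: among findings significant in the
    training sample, the fraction expected to replicate in validation."""
    sig_null = pi0 * alpha_train          # true nulls significant by chance
    sig_alt = (1 - pi0) * power_train     # true effects detected
    replicated = sig_null * alpha_valid + sig_alt * power_valid
    return replicated / (sig_null + sig_alt)

# e.g. 90% true nulls, alpha = 0.05 and 80% power in both samples
rdr = expected_rdr(0.9, 0.05, 0.8, 0.05, 0.8)  # ~0.53
```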

  18. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    PubMed

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, the California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning, and between total ACT score and deductive reasoning, were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.

  19. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    PubMed

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the dataset used for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create a RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and Dental Age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
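    The CA-vs-DA comparison is a paired t-test on per-subject differences. A minimal sketch of the test statistic (the ages are invented):

```python
import math

def paired_t(chronological, dental):
    """Paired t statistic for the CA - DA differences (df = n - 1)."""
    d = [c - x for c, x in zip(chronological, dental)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)
    return mean / math.sqrt(var / n)

t = paired_t([10.2, 11.0, 11.9, 13.3], [10.0, 11.0, 12.0, 13.0])
```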

  20. Satellite-based estimation of evapotranspiration in typical forests of China

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, R.

    2017-12-01

    Evapotranspiration (ET) is the key process affecting the interaction between land surface and atmosphere. Satellite remote sensing is the only feasible technique to monitor the terrestrial ET on large scale. Microwave Emissivity Difference Vegetation Index (EDVI) indicates vegetation water content and can be retrieved under both clear and cloudy sky. Based on EDVI, a quantitative algorithm for ET estimation in China was developed. In this study, we improved the EDVI-based ET algorithm by using the datasets from multiple platforms, including Moderate-Resolution Imaging Spectroradiometer (MODIS), Clouds and Earth's Radiation energy system (CERES) and European Centre for Medium-Range Weather Forecasts (ECMWF). As primary inputs of the algorithm, they are all independent from ground-based measurements. The improved algorithm was tested in three ChinaFlux forest sites, Dinghushan(DHS) subtropical evergreen broad-leaved forest site, Qianyanzhou(QYZ) subtropical man-planted forest site and Changbaishan(CBS) temperate deciduous broad-leaved coniferous mixed forest site. Validations against the in-situ measured ETobs from 2003 to 2005 showed that the EDVI-based algorithm has the capability to simulate midday ET within reasonable accuracies. In terms of the magnitude and seasonal cycle, the estimated ETcal are in very good agreement with the ETobs. The correlation coefficients(R) between ETcal and ETobs during midday vary from 0.51 to 0.80 over the study years, with the annual mean bias (relative bias) ranging from -53.02 Wm-2 (-26.46%) to 34.02 Wm-2 (+23.69%). At monthly scale, the R of monthly mean ETcal and ETobs can reach to 0.83, 0.93 and 0.82 at DHS, QYZ and CBS, with bias of +3.0%, -22.3% and -9.7%, respectively. Contamination from precipitation can partly affect the performances of this algorithm. Validation results generally become better after removing those samples in rainy days. The results indicate that this EDVI-based algorithm, driven completely by using

  1. Water quality monitoring records for estimating tap water arsenic and nitrate: a validation study.

    PubMed

    Searles Nielsen, Susan; Kuehn, Carrie M; Mueller, Beth A

    2010-01-28

    Tap water may be an important source of exposure to arsenic and nitrate. Obtaining and analyzing samples in the context of large studies of health effects can be expensive. As an alternative, studies might estimate contaminant levels in individual homes by using publicly available water quality monitoring records, either alone or in combination with geographic information systems (GIS). We examined the validity of records-based methods in Washington State, where arsenic and nitrate contamination is prevalent but generally observed at modest levels. Laboratory analysis of samples from 107 homes (median 0.6 microg/L arsenic, median 0.4 mg/L nitrate as nitrogen) served as our "gold standard." Using Spearman's rho we compared these measures to estimates obtained using only the homes' street addresses and recent and/or historical measures from publicly monitored water sources within specified distances (radii) ranging from one half mile to 10 miles. Agreement improved as distance decreased, but the proportion of homes for which we could estimate summary measures also decreased. When including all homes, agreement was 0.05-0.24 for arsenic (8 miles), and 0.31-0.33 for nitrate (6 miles). Focusing on the closest source yielded little improvement. Agreement was greatest among homes with private wells. For homes on a water system, agreement improved considerably if we included only sources serving the relevant system (rho = 0.29 for arsenic, rho = 0.60 for nitrate). Historical water quality databases show some promise for categorizing epidemiologic study participants in terms of relative tap water nitrate levels. Nonetheless, such records-based methods must be used with caution, and their use for arsenic may be limited.
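The agreement figures above are Spearman rank correlations. A self-contained sketch of Spearman's rho (rank the two series, then take the Pearson correlation of the ranks; the data below are hypothetical, not the study's measurements):

```python
def rank(xs):
    """Average ranks (1-based), with ties averaged."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical measured vs. records-based arsenic levels (microg/L)
rho = spearman_rho([0.6, 1.2, 0.3, 2.5, 0.9], [0.5, 1.0, 0.4, 2.0, 1.1])
```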

  2. Validating a mass balance accounting approach to using 7Be measurements to estimate event-based erosion rates over an extended period at the catchment scale

    NASA Astrophysics Data System (ADS)

    Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni

    2016-07-01

    Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.

  3. Clinical validation of the General Ability Index--Estimate (GAI-E): estimating premorbid GAI.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Iverson, Grant L; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2006-09-01

    The clinical utility of the General Ability Index--Estimate (GAI-E; Lange, Schoenberg, Chelune, Scott, & Adams, 2005) for estimating premorbid GAI scores was investigated using the WAIS-III standardization clinical trials sample (The Psychological Corporation, 1997). The GAI-E algorithms combine Vocabulary, Information, Matrix Reasoning, and Picture Completion subtest raw scores with demographic variables to predict GAI. Ten GAI-E algorithms were developed combining demographic variables with single subtest scaled scores and with two subtests. Estimated GAI are presented for participants diagnosed with dementia (n = 50), traumatic brain injury (n = 20), Huntington's disease (n = 15), Korsakoff's disease (n = 12), chronic alcohol abuse (n = 32), temporal lobectomy (n = 17), and schizophrenia (n = 44). In addition, a small sample of participants without dementia and diagnosed with depression (n = 32) was used as a clinical comparison group. The GAI-E algorithms provided estimates of GAI that closely approximated scores expected for a healthy adult population. The greatest differences between estimated GAI and obtained GAI were observed for the single subtest GAI-E algorithms using the Vocabulary, Information, and Matrix Reasoning subtests. Based on these data, recommendations for the use of the GAI-E algorithms are presented.

  4. Estimation of in-vivo neurotransmitter release by brain microdialysis: the issue of validity.

    PubMed

    Di Chiara, G.; Tanda, G.; Carboni, E.

    1996-11-01

Although microdialysis is commonly understood as a method of sampling low molecular weight compounds in the extracellular compartment of tissues, this definition appears insufficient to specifically describe brain microdialysis of neurotransmitters. In fact, transmitter overflow from the brain into dialysates is critically dependent upon the composition of the perfusing Ringer. Therefore, the dialysing Ringer not only recovers the transmitter from the extracellular brain fluid but is a main determinant of its in-vivo release. Two types of brain microdialysis are distinguished: quantitative microdialysis and conventional microdialysis. Quantitative microdialysis provides an estimate of neurotransmitter concentrations in the extracellular fluid in contact with the probe. However, this information might poorly reflect the kinetics of neurotransmitter release in vivo. Conventional microdialysis involves perfusion at a constant rate with a transmitter-free Ringer, resulting in the formation of a steep neurotransmitter concentration gradient extending from the Ringer into the extracellular fluid. This artificial gradient might be critical for the ability of conventional microdialysis to detect and resolve phasic changes in neurotransmitter release taking place in the implanted area. On the basis of these characteristics, conventional microdialysis of neurotransmitters can be conceptualized as a model of the in-vivo release of neurotransmitters in the brain. As such, the criteria of face-validity, construct-validity and predictive-validity should be applied to select the most appropriate experimental conditions for estimating neurotransmitter release in specific brain areas in relation to behaviour.

  5. Estimating Number of People Using Calibrated Monocular Camera Based on Geometrical Analysis of Surface Area

    NASA Astrophysics Data System (ADS)

    Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki

We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By quantitatively analyzing the geometrical relationships between image pixels and the volumes they intersect in the real world, the number of people can be obtained directly from a foreground image. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it estimates the number of people in an a priori manner, so it needs no ground-truth data, unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.

  6. Validation of Regression-Based Myogenic Correction Techniques for Scalp and Source-Localized EEG

    PubMed Central

    McMenamin, Brenton W.; Shackman, Alexander J.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2008-01-01

EEG and EEG source-estimation are susceptible to electromyographic artifacts (EMG) generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect, even in low frequencies such as alpha (8–13 Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist, and its utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference scores, within-subjects condition-wise, and within-subjects epoch-wise, on the scalp and in data modeled using the LORETA algorithm. Although the within-subjects epoch-wise technique showed superior performance on the scalp, no technique succeeded in the source space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development. PMID:19298626
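Regression-based EMG correction of the kind evaluated here amounts, in its simplest form, to regressing EEG band power on EMG band power across epochs and removing the fitted component. A minimal sketch under that assumption (hypothetical power values; the study's actual implementations differ in how epochs and subjects are pooled):

```python
def regress_out(eeg_power, emg_power):
    """Epoch-wise regression correction sketch: remove the component of
    EEG band power linearly predicted by EMG band power across epochs."""
    n = len(eeg_power)
    mx = sum(emg_power) / n
    my = sum(eeg_power) / n
    # Ordinary least-squares slope of EEG power on EMG power
    beta = (sum((x - mx) * (y - my) for x, y in zip(emg_power, eeg_power))
            / sum((x - mx) ** 2 for x in emg_power))
    return [y - beta * (x - mx) for x, y in zip(emg_power, eeg_power)]

# Hypothetical per-epoch alpha-band power, perfectly EMG-contaminated
eeg = [3.0, 5.0, 7.0, 9.0]
emg = [1.0, 2.0, 3.0, 4.0]
corrected = regress_out(eeg, emg)
```

With fully EMG-explained variation, the corrected values collapse to the mean EEG power; real data would retain the neurogenic residual.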

  7. Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.

    PubMed

    Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari

    2018-04-05

The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD)-based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed-effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. Through linear mixed-effects modeling that accounts for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood
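PRx itself is conventionally computed as a moving Pearson correlation between slow waves of arterial blood pressure and intracranial pressure (10-s averages over a roughly 5-min window). A sketch of that standard construction (the window length and the synthetic signals below are illustrative assumptions, not taken from this study):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def prx(abp_10s, icp_10s, window=30):
    """PRx sketch: moving Pearson correlation of 10-s mean ABP and ICP
    over a `window`-sample (~5 min) sliding window."""
    return [pearson(abp_10s[i:i + window], icp_10s[i:i + window])
            for i in range(len(abp_10s) - window + 1)]

# Synthetic 10-s means: ICP passively tracks ABP, so PRx ≈ +1 (impaired reactivity)
abp = [float(i % 20) for i in range(60)]
icp = [0.5 * v + 8.0 for v in abp]
prx_series = prx(abp, icp)
```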

  8. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048

  9. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  10. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    PubMed Central

    Xu, Nan; Spreng, R. Nathan; Doerschuk, Peter C.

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain. PMID:28559793

  11. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach.

    PubMed

    Xu, Nan; Spreng, R Nathan; Doerschuk, Peter C

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain.
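The core idea of prediction correlation can be sketched as follows: instead of correlating signals a and b directly, fit a causal predictor of b from past values of a, then correlate b with that prediction. This toy version uses a one-lag least-squares linear predictor (the paper's causal system is more general) on synthetic signals:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def prediction_correlation(a, b, lag=1):
    """Toy prediction correlation: predict b[t] from a[t - lag] with a
    least-squares linear model, then correlate b with its prediction."""
    xs = a[:-lag]   # past values of the driving signal
    ys = b[lag:]    # aligned future values of the driven signal
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    pred = [my + beta * (x - mx) for x in xs]
    return pearson(ys, pred)

# Synthetic example: b is a one-step delayed copy of a, so a causally drives b
a = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0]
b = [0.0] + a[:-1]
r = prediction_correlation(a, b)
```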

  12. Further assessment of a method to estimate reliability and validity of qualitative research findings.

    PubMed

    Hinds, P S; Scandrett-Hibden, S; McAulay, L S

    1990-04-01

    The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism is derived from the belief that qualitative researchers give insufficient attention to estimating reliability and validity of data, and the differences between quantitative and qualitative methods in assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.

  13. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

Identify simple, fully validated cost models that provide estimation uncertainty along with the cost estimate. Based on the COCOMO variable set. Use machine learning techniques to determine: a) the minimum number of cost drivers required for NASA domain-based cost models; b) the minimum number of data records required; and c) estimation uncertainty. Build a repository of software cost estimation information. Coordinate tool development and data collection with: a) tasks funded by PA&E Cost Analysis; b) the IV&V Effort Estimation Task; and c) NASA SEPG activities.
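The models referred to are built on the COCOMO variable set. For orientation, the basic COCOMO form estimates effort as a power law in size; the coefficients below are Boehm's textbook organic-mode values, not NASA-calibrated ones:

```python
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO, organic mode: effort (person-months) = a * KLOC**b.
    Coefficients a and b are Boehm's published defaults, used here
    purely for illustration."""
    return a * kloc ** b

effort = cocomo_basic_effort(32.0)  # a hypothetical 32 KLOC project
```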

  14. Relative validity of a web-based food frequency questionnaire for Danish adolescents.

    PubMed

    Bjerregaard, Anne A; Halldorsson, Thorhallur I; Kampmann, Freja B; Olsen, Sjurdur F; Tetens, Inge

    2018-01-12

With increased focus on dietary intake among youth and the risk of diseases later in life, it is important, prior to assessing diet-disease relationships, to examine the validity of the dietary assessment tool. This study's objective was to evaluate the relative validity of a self-administered web-based FFQ among Danish children aged 12 to 15 years. From a nested sub-cohort within the Danish National Birth Cohort, 124 adolescents participated. Four weeks after completion of the FFQ, adolescents were invited to complete three telephone-based 24HRs, administered 4 weeks apart. Mean or median intakes of nutrients and food groups estimated from the FFQ were compared with the mean of the 3x24HRs. To assess ranking ability, we calculated the proportion correctly classified into the same quartile and the proportion misclassified into the opposite quartile. Spearman's correlation coefficients and de-attenuated coefficients were calculated to assess agreement between the FFQ and the 24HRs. Across all food groups, the mean percentage of adolescents classified into the same quartile was 35%, and into the opposite quartile 7.5%. The mean Spearman's correlation was 0.28 for food groups and 0.35 for nutrients. Adjustment for energy and within-person variation in the 24HRs had little effect on the magnitude of the correlations for food groups and nutrients. We found overestimation by the FFQ compared with the 24HRs for fish, fruits, vegetables, oils and dressing, and underestimation by the FFQ for meat/poultry and sweets. Median intake of beverages, dairy, bread, and cereals, and mean total energy and carbohydrate intake, did not differ significantly between the two methods. The relative validity of the FFQ compared with the 3x24HRs showed that ranking ability differed across food groups and nutrients, with the best ranking for estimated intake of dairy, fruits, and oils and dressing. Larger variation was observed for fish, sweets and vegetables. For nutrients, the ranking
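The same-quartile/opposite-quartile figures above come from cross-classifying participants by the two methods. A sketch of that computation using simple empirical quartile cut points (hypothetical intake values; the study's exact quartile definition may differ):

```python
def quartile_index(values, v):
    """0-based quartile of v within values, using simple empirical cut points."""
    srt = sorted(values)
    n = len(srt)
    cuts = [srt[n // 4], srt[n // 2], srt[3 * n // 4]]
    return sum(v >= c for c in cuts)

def classification_agreement(method1, method2):
    """Percent of subjects classified into the same quartile by both methods,
    and percent classified into opposite (extreme) quartiles."""
    same = opposite = 0
    for a, b in zip(method1, method2):
        qa = quartile_index(method1, a)
        qb = quartile_index(method2, b)
        if qa == qb:
            same += 1
        if {qa, qb} == {0, 3}:
            opposite += 1
    n = len(method1)
    return 100 * same / n, 100 * opposite / n

# Hypothetical intakes (g/day) from an FFQ and from repeated 24-h recalls
ffq = [12, 25, 8, 40, 33, 15, 22, 30]
recalls = [10, 27, 9, 38, 35, 14, 34, 31]
same_pct, opposite_pct = classification_agreement(ffq, recalls)
```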

  15. Competency-Based Training and Simulation: Making a "Valid" Argument.

    PubMed

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  16. A Comparison of Energy Expenditure Estimation of Several Physical Activity Monitors

    PubMed Central

    Dannecker, Kathryn L.; Sazonova, Nadezhda A.; Melanson, Edward L.; Sazonov, Edward S.; Browning, Raymond C.

    2013-01-01

Accurately and precisely estimating free-living energy expenditure (EE) is important for monitoring energy balance and quantifying physical activity. Recently, single and multi-sensor devices have been developed that can classify physical activities, potentially resulting in improved estimates of EE. PURPOSE To determine the validity of EE estimation of a footwear-based physical activity monitor and to compare this validity against a variety of research and consumer physical activity monitors. METHODS Nineteen healthy young adults (10 male, 9 female) completed a four-hour stay in a room calorimeter. Participants wore a footwear-based physical activity monitor, as well as Actical, ActiGraph, IDEEA, DirectLife and Fitbit devices. Each individual performed a series of postures/activities. We developed models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. RESULTS Estimated EE using the shoe-based device was not significantly different from measured EE (mean (SE): 476 (20) vs. 478 (18) kcal) and had a root mean square error (RMSE) of 29.6 kcal (6.2%). The IDEEA and DirectLife estimates of EE were not significantly different from the measured EE but the ActiGraph and Fitbit devices significantly underestimated EE. Root mean square errors were 93.5 kcal (19%), 62.1 kcal (14%), 88.2 kcal (18%), 136.6 kcal (27%), 130.1 kcal (26%), and 143.2 kcal (28%) for Actical, DirectLife, IDEEA, ActiGraph and Fitbit respectively. CONCLUSIONS The shoe-based physical activity monitor provides a valid estimate of EE while the other physical activity monitors tested have a wide range of validity when estimating EE. Our results also demonstrate that estimating EE based on classification of physical activities can be more accurate and precise than estimating EE based on total physical activity. PMID:23669877

  17. A comparison of energy expenditure estimation of several physical activity monitors.

    PubMed

    Dannecker, Kathryn L; Sazonova, Nadezhda A; Melanson, Edward L; Sazonov, Edward S; Browning, Raymond C

    2013-11-01

Accurately and precisely estimating free-living energy expenditure (EE) is important for monitoring energy balance and quantifying physical activity. Recently, single and multisensor devices have been developed that can classify physical activities, potentially resulting in improved estimates of EE. This study aimed to determine the validity of EE estimation of a footwear-based physical activity monitor and to compare this validity against a variety of research and consumer physical activity monitors. Nineteen healthy young adults (10 men, 9 women) completed a 4-h stay in a room calorimeter. Participants wore a footwear-based physical activity monitor as well as Actical, ActiGraph, IDEEA, DirectLife, and Fitbit devices. Each individual performed a series of postures/activities. We developed models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. Estimated EE using the shoe-based device was not significantly different than measured EE (mean ± SE; 476 ± 20 vs 478 ± 18 kcal, respectively) and had a root-mean-square error of 29.6 kcal (6.2%). The IDEEA and the DirectLife estimates of EE were not significantly different than the measured EE, but the ActiGraph and the Fitbit devices significantly underestimated EE. Root-mean-square errors were 93.5 kcal (19%), 62.1 kcal (14%), 88.2 kcal (18%), 136.6 kcal (27%), 130.1 kcal (26%), and 143.2 kcal (28%) for Actical, DirectLife, IDEEA, ActiGraph, and Fitbit, respectively. The shoe-based physical activity monitor provides a valid estimate of EE, whereas the other physical activity monitors tested have a wide range of validity when estimating EE. Our results also demonstrate that estimating EE based on classification of physical activities can be more accurate and precise than estimating EE based on total physical activity.
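The RMSE figures reported above can be reproduced in form (not in value; the data below are hypothetical) as the root-mean-square difference between estimated and measured EE, optionally expressed as a percentage of mean measured EE:

```python
def rmse_and_percent(measured, estimated):
    """Root-mean-square error of estimated vs. measured EE (kcal),
    and that error as a percentage of mean measured EE."""
    n = len(measured)
    rmse = (sum((e - m) ** 2 for m, e in zip(measured, estimated)) / n) ** 0.5
    return rmse, 100 * rmse / (sum(measured) / n)

# Hypothetical calorimeter-measured vs. device-estimated EE (kcal)
measured = [450.0, 500.0, 480.0]
estimated = [460.0, 490.0, 470.0]
rmse, pct = rmse_and_percent(measured, estimated)
```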

  18. On the validity of the incremental approach to estimate the impact of cities on air quality

    NASA Astrophysics Data System (ADS)

    Thunis, Philippe

    2018-01-01

The question of how much cities are the sources of their own air pollution is not merely theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.

  19. Development and validation of a food frequency questionnaire to estimate the intake of genistein in Malaysia.

    PubMed

    Fernandez, Anne R; Omar, Siti Zawiah; Husain, Ruby

    2013-11-01

    To develop and validate a food frequency questionnaire (FFQ) to estimate the genistein intake in a Malaysian population of pregnant women. A single 24-h dietary recall was obtained from 40 male and female volunteers. A FFQ of commonly consumed genistein-rich foods was developed from these recalls, and a database of the genistein content of foods found in Malaysia was set up. The FFQ was validated against a 7-d food diary (FD) kept by 46 pregnant women and against non-fasting serum samples obtained from 64 pregnant women. Reproducibility was assessed by comparing the responses on two FFQs administered approximately 1 month apart. The Pearson correlation coefficient between FFQ1 and FD was 0.724 and that between FFQ2 and FD was 0.807. Classification into the same or adjacent quintiles was 78% for FFQ1 versus FD and 88% for FFQ2 versus FD. A significant dose-response relation was found between FFQ-estimated genistein intake and serum levels. The FFQ developed is a reliable, valid tool for categorising people by level of genistein intake.
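
    The same-or-adjacent-quintile agreement used above can be sketched as follows. This is a hypothetical illustration: the intake values, the rank-based quintile assignment, and the function name are invented for the example, not taken from the study.

```python
def same_or_adjacent_quintile(a, b):
    """Percent of subjects classified into the same or an adjacent
    quintile by two intake estimates (rank-based quintiles)."""
    def quintiles(v):
        ranked = sorted(range(len(v)), key=lambda i: v[i])
        q = [0] * len(v)
        for pos, i in enumerate(ranked):
            q[i] = pos * 5 // len(v)   # quintile index 0..4
        return q
    qa, qb = quintiles(a), quintiles(b)
    return 100 * sum(abs(x - y) <= 1 for x, y in zip(qa, qb)) / len(a)

# Invented genistein-intake estimates for 10 subjects from two methods.
ffq = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
fd  = [3, 1, 2, 8, 6, 5, 7, 4, 10, 9]
print(same_or_adjacent_quintile(ffq, fd))  # prints 80.0
```

    Eight of the ten invented subjects land in the same or an adjacent quintile, giving 80% agreement, analogous to the 78-88% reported above.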

  20. Validation of snow line estimations using MODIS images for the Elqui River basin, Chile

    NASA Astrophysics Data System (ADS)

    Vasquez, Nicolas; Lagos, Miguel; Vargas, Ximena

    2015-04-01

    Precipitation events in North-Central Chile are very important because the region has a Mediterranean climate, with a humid period and an extended dry one. The elevation separating solid from liquid precipitation (the snow line) in each event is important information that allows estimation of 1) the snow-covered area available for snowmelt forecasting during the dry season (the only water resource in this period) and 2) the area affected by rain, for flood modelling and infrastructure design. In this work, the snow line was estimated with a meteorological approach, considering precipitation, temperature, relative humidity and dew point information at a daily scale from 2004 to 2010 and hourly from 2010 to 2013. Different sets of meteorological stations, measuring snow and rain, were used in the two periods because new stations were installed in the study area; together they cover approximately 1000 to 3000 m a.s.l. The methodology presented in this research is based on the vertical variation of dew point and temperature, which is more stable than the vertical behavior of relative humidity. The results calculated from meteorological data are compared with MODIS images, considering three criteria: (1) the median altitude of the minimum specific fractional snow covered area (FSCA), (2) the mean elevation of pixels with a FSCA<10% and (3) the snow line estimation via snow covered area and hypsometric curve. Historically in Chile, the snow line has been studied using a few specific precipitation and temperature observations, or estimations of zero isotherms from upper-air soundings. A comparison between these estimations and the results validated through MOD10A1/MYD10A1 products was made in order to identify tendencies and/or variations of the snow line at an annual scale.

  1. Side-information-dependent correlation channel estimation in hash-based distributed video coding.

    PubMed

    Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter

    2012-04-01

    In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.

  2. Comparisons of Instantaneous TRMM Ground Validation and Satellite Rain Rate Estimates at Different Spatial Scales

    NASA Technical Reports Server (NTRS)

    Wolff, David B.; Fisher, Brad L.

    2007-01-01

    This study provides a comprehensive inter-comparison of instantaneous rain estimates from the two rain sensors aboard the TRMM satellite with ground data from three designated Ground Validation (GV) sites: Kwajalein Atoll, Melbourne, Florida and Houston, Texas. The satellite rain retrievals utilize rain observations collected by the TRMM Microwave Imager (TMI) and the Precipitation Radar (PR) aboard the TRMM satellite. Three standard instantaneous rain products are then generated from the rain information retrieved from the satellite using the TMI, PR and Combined (COM) rain algorithms. The validation data set used in this study was obtained from instantaneous rain rates inferred from ground radars at each GV site. The first comparison used 0.5° x 0.5° gridded data obtained from the TRMM 3G68 product, and similarly gridded GV data obtained from ground-based radars. The comparisons were made at the same spatial and temporal scales in order to eliminate sampling biases. An additional comparison was made by averaging rain rates for the PR, COM and GV estimates within each TMI footprint (approx. 150 square kilometers). For this analysis, unconditional mean rain rates from PR, COM and GV estimates were calculated within each TMI footprint that was observed within 100 km of the respective GV site (and also observed by the PR). This analysis used all the available matching data from the period 1999-2004, representing a sample size of over 50,000 footprints for each site. In the first analysis our results showed that all of the respective rain rate estimates agree well, with some exceptions. The more salient differences were associated with heavy rain events in which one or more of the algorithms failed to properly retrieve these extreme events. Also, it appears that there is a preferred mode of precipitation for TMI rain rates at or near 2 mm per hour over the ocean. This mode was noted over ocean areas of Melbourne, Florida and Kwajalein

  3. Validity of a self-administered food frequency questionnaire in the estimation of heterocyclic aromatic amines.

    PubMed

    Iwasaki, Motoki; Mukai, Tomomi; Takachi, Ribeka; Ishihara, Junko; Totsuka, Yukari; Tsugane, Shoichiro

    2014-08-01

    Clarification of the putative etiologic role of heterocyclic aromatic amines (HAAs) in the development of cancer requires a validated assessment tool for dietary HAAs. This study primarily aimed to evaluate the validity of a food frequency questionnaire (FFQ) in estimating HAA intake, using 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) level in human hair as the reference method. We first updated analytical methods for PhIP using liquid chromatography-electrospray ionization/tandem mass spectrometry (LC-ESI/MS/MS) and measured 44 fur samples from nine rats from a feeding study as partial verification of the quantitative performance of LC-ESI/MS/MS. We next measured PhIP level in human hair samples from a validation study of the FFQ (n = 65). HAA intake from the FFQ was estimated using information on intake from six fish items and seven meat items and data on HAA content in each food item. Correlation coefficients between PhIP level in human hair and HAA intake from the FFQ were calculated. The animal feeding study of PhIP found a significant dose-response relationship between dosage and PhIP in rat fur. Mean level was 53.8 pg/g hair among subjects with values over the limit of detection (LOD) (n = 57). We found significant positive correlation coefficients between PhIP in human hair and HAA intake from the FFQ, with Spearman rank correlation coefficients of 0.35 for all subjects, 0.21 for subjects with values over the LOD, and 0.34 for subjects with values over the limit of quantification. Findings from the validation study suggest that the FFQ is reasonably valid for the assessment of HAA intake.

  4. Validation of Diagnostic Groups Based on Health Care Utilization Data Should Adjust for Sampling Strategy.

    PubMed

    Cadieux, Geneviève; Tamblyn, Robyn; Buckeridge, David L; Dendukuri, Nandini

    2017-08-01

    Valid measurement of outcomes such as disease prevalence using health care utilization data is fundamental to the implementation of a "learning health system." Definitions of such outcomes can be complex, based on multiple diagnostic codes. The literature on validating such data demonstrates a lack of awareness of the need for a stratified sampling design and corresponding statistical methods. We propose a method for validating the measurement of diagnostic groups that have: (1) different prevalences of diagnostic codes within the group; and (2) low prevalence. We describe an estimation method whereby: (1) low-prevalence diagnostic codes are oversampled, and the positive predictive value (PPV) of the diagnostic group is estimated as a weighted average of the PPV of each diagnostic code; and (2) claims that fall within a low-prevalence diagnostic group are oversampled relative to claims that are not, and bias-adjusted estimators of sensitivity and specificity are generated. We illustrate our proposed method using an example from population health surveillance in which diagnostic groups are applied to physician claims to identify cases of acute respiratory illness. Failure to account for the prevalence of each diagnostic code within a diagnostic group leads to the underestimation of the PPV, because low-prevalence diagnostic codes are more likely to be false positives. Failure to adjust for oversampling of claims that fall within the low-prevalence diagnostic group relative to those that do not leads to the overestimation of sensitivity and underestimation of specificity.
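
    The prevalence-weighted PPV idea described above can be sketched as follows. This is a minimal illustration with invented diagnostic codes, claim counts, and per-code PPVs; the paper's full method also derives bias-adjusted sensitivity and specificity estimators, which are not shown here.

```python
def weighted_ppv(claim_counts, code_ppv):
    """PPV of a diagnostic group as the weighted average of the PPV
    of each member code, weighted by the code's share of claims."""
    total = sum(claim_counts.values())
    return sum(claim_counts[c] / total * code_ppv[c] for c in claim_counts)

# Invented example: one common code and two rare ones in the same group.
counts = {"codeA": 900, "codeB": 80, "codeC": 20}       # claims per code
ppvs   = {"codeA": 0.95, "codeB": 0.60, "codeC": 0.50}  # per-code PPV

print(round(weighted_ppv(counts, ppvs), 3))  # prints 0.913
```

    An unweighted average of the three per-code PPVs here would be about 0.68, understating the group PPV, which is the underestimation the abstract warns about when code prevalence is ignored.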

  5. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  6. A home calendar and recall method of last menstrual period for estimating gestational age in rural Bangladesh: a validation study.

    PubMed

    Gernand, Alison D; Paul, Rina Rani; Ullah, Barkat; Taher, Muhammad A; Witter, Frank R; Wu, Lee; Labrique, Alain B; West, Keith P; Christian, Parul

    2016-10-21

    The best method of gestational age assessment is by ultrasound in the first trimester; however, this method is impractical in large field trials in rural areas. Our objective was to assess the validity of gestational age estimated from prospectively collected date of last menstrual period (LMP) using crown-rump length (CRL) measured in early pregnancy by ultrasound. As part of a large, cluster-randomized, controlled trial in rural Bangladesh, we collected dates of LMP by recall and as marked on a calendar every 5 weeks in women likely to become pregnant. Among those with a urine-test confirmed pregnancy, a subset with gestational age of <15 weeks (n = 353) were enrolled for ultrasound follow-up to measure CRL. We compared interview-assessed LMP with CRL gestational age estimates and classification of preterm, term, and post-term births. LMP-based gestational age was higher than CRL by a mean (SD) of 2.8 (10.7) days; differences varied by maternal education and preterm birth (P < 0.05). Lin's concordance correlation coefficient was good at ultrasound [0.63 (95 % CI 0.56, 0.69)] and at birth [0.77 (95 % CI 0.73, 0.81)]. Validity of classifying preterm birth was high but post-term was lower, with specificity of 96 and 89 % and sensitivity of 86 and 67 %, respectively. Results were similar by parity. Prospectively collected LMP provided a valid estimate of gestational age and preterm birth in a rural, low-income setting and may be a suitable alternative to ultrasound in programmatic settings and large field trials. ClinicalTrials.gov NCT00860470.
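
    Lin's concordance correlation coefficient reported above has a standard closed form: 2·cov(x,y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal sketch follows; the five paired gestational ages are invented for illustration, not data from the study.

```python
import statistics

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    computed with population (divide-by-n) moments."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Invented gestational ages (days) for five pregnancies: LMP vs. CRL.
lmp = [70, 84, 91, 105, 112]
crl = [68, 86, 90, 108, 110]
print(f"{lins_ccc(lmp, crl):.3f}")  # prints 0.990
```

    Unlike the Pearson correlation, the denominator penalizes both location and scale shifts between the two methods, which is why it suits method-agreement studies like this one.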

  7. Observations-based GPP estimates

    NASA Astrophysics Data System (ADS)

    Joiner, J.; Yoshida, Y.; Jung, M.; Tucker, C. J.; Pinzon, J. E.

    2017-12-01

    We have developed global estimates of gross primary production (GPP) based on a relatively simple satellite-observation-based approach. Reflectance data from the MODIS instruments, in the form of vegetation indices, provide information about photosynthetic capacity at both high temporal and spatial resolution; these are combined with chlorophyll solar-induced fluorescence from the Global Ozone Monitoring Experiment-2 instrument, which is noisier and available only at coarser temporal and spatial scales. We compare our GPP estimates with those from eddy covariance flux towers and show that they are competitive with more complicated extrapolated machine-learning GPP products. Our results provide insight into the amount of variance in GPP that can be explained with satellite observations and also show how processing of the satellite reflectance data is key to using it for accurate GPP estimates.

  8. Is self-reporting workplace activity worthwhile? Validity and reliability of occupational sitting and physical activity questionnaire in desk-based workers.

    PubMed

    Pedersen, Scott J; Kitic, Cecilia M; Bird, Marie-Louise; Mainsbridge, Casey P; Cooley, P Dean

    2016-08-19

    With the advent of workplace health and wellbeing programs designed to address prolonged occupational sitting, tools to measure behaviour change within this environment should derive from empirical evidence. In this study we measured aspects of validity and reliability for the Occupational Sitting and Physical Activity Questionnaire, which asks employees to recount the percentage of work time they spend in the seated, standing, and walking postures during a typical workday. Three separate cohort samples (N = 236) were drawn from a population of government desk-based employees across several departmental agencies. These volunteers were part of a larger state-wide intervention study. Workplace sitting and physical activity behaviour was measured both subjectively, against the International Physical Activity Questionnaire, and objectively, against activPAL accelerometers, before the intervention began. Criterion validity and concurrent validity for each of the three posture categories were assessed using Spearman's rank correlation coefficients, and a bias comparison with 95 % limits of agreement. Test-retest reliability of the survey was reported with intraclass correlation coefficients. Criterion validity for this survey was strong for sitting and standing estimates, but weak for walking. Participants significantly overestimated the amount of walking they did at work. Concurrent validity was moderate for sitting and standing, but low for walking. Test-retest reliability of this survey proved to be questionable for our sample. Based on our findings we must caution occupational health and safety professionals about the use of employee self-report data to estimate workplace physical activity. While the survey produced accurate measurements for time spent sitting at work, it was more difficult for employees to estimate their workplace physical activity.

  9. Proposal and validation of a new model to estimate survival for hepatocellular carcinoma patients.

    PubMed

    Liu, Po-Hong; Hsu, Chia-Yang; Hsia, Cheng-Yuan; Lee, Yun-Hsuan; Huang, Yi-Hsiang; Su, Chien-Wei; Lee, Fa-Yauh; Lin, Han-Chieh; Huo, Teh-Ia

    2016-08-01

    The survival of hepatocellular carcinoma (HCC) patients is heterogeneous. We aim to develop and validate a simple prognostic model to estimate survival for HCC patients (MESH score). A total of 3182 patients were randomised into derivation and validation cohorts. Multivariate analysis was used to identify independent predictors of survival in the derivation cohort. The validation cohort was employed to examine the prognostic capabilities. The MESH score allocated 1 point for each of the following parameters: large tumour (beyond Milan criteria), presence of vascular invasion or metastasis, Child-Turcotte-Pugh score ≥6, performance status ≥2, serum alpha-fetoprotein level ≥20 ng/ml, and serum alkaline phosphatase ≥200 IU/L, with a maximum of 6 points. In the validation cohort, significant survival differences were found across all MESH scores from 0 to 6 (all p < 0.01). The MESH system was associated with the highest homogeneity and lowest corrected Akaike information criterion compared with the Barcelona Clínic Liver Cancer, Hong Kong Liver Cancer (HKLC), Cancer of the Liver Italian Program, Taipei Integrated Scoring, and model to estimate survival in ambulatory HCC patients systems. The prognostic accuracy of the MESH scores remained constant in patients with hepatitis B- or hepatitis C-related HCC. The MESH score can also discriminate survival for patients from early to advanced stages of HCC. This newly proposed simple and accurate survival model provides enhanced prognostic accuracy for HCC. The MESH system is a useful supplement to the BCLC and HKLC classification schemes in refining treatment strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Comparison of 3 estimation methods of mycophenolic acid AUC based on a limited sampling strategy in renal transplant patients.

    PubMed

    Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel

    2009-04-01

    A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.
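
    The reference AUCs above were obtained with the trapezoidal rule on 8 measurements. A minimal sketch of that calculation follows; the sampling times and MPA concentrations are invented for illustration, not data from the study.

```python
def trapezoid_auc(times_h, conc):
    """Linear trapezoidal rule over consecutive sampling points."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times_h, conc),
                                             zip(times_h[1:], conc[1:])))

# Invented 8-point MPA concentration-time profile over a 12-h interval.
times = [0, 0.5, 1, 2, 4, 6, 9, 12]                  # h after dose
conc  = [2.0, 10.0, 15.0, 8.0, 4.0, 3.0, 2.5, 2.0]   # mg/L
print(trapezoid_auc(times, conc))  # prints 54.75 (mg*h/L)
```

    The limited-sampling methods compared in the study approximate this full-profile AUC from only 3 of the 8 concentrations, via multilinear regression or Bayesian estimation.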

  11. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach for selecting the regularization factor is adopted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrate the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.

  12. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  13. Estimating and validating harvesting system production through computer simulation

    Treesearch

    John E. Baumgras; Curt C. Hassler; Chris B. LeDoux

    1993-01-01

    A Ground Based Harvesting System Simulation model (GB-SIM) has been developed to estimate stump-to-truck production rates and multiproduct yields for conventional ground-based timber harvesting systems in Appalachian hardwood stands. Simulation results reflect inputs that define harvest site and timber stand attributes, wood utilization options, and key attributes of...

  14. Validity of the estimates of oral cholera vaccine effectiveness derived from the test-negative design.

    PubMed

    Ali, Mohammad; You, Young Ae; Sur, Dipika; Kanungo, Suman; Kim, Deok Ryun; Deen, Jacqueline; Lopez, Anna Lena; Wierzba, Thomas F; Bhattacharya, Sujit K; Clemens, John D

    2016-01-20

    The test-negative design (TND) has emerged as a simple method for evaluating vaccine effectiveness (VE). Its utility for evaluating oral cholera vaccine (OCV) effectiveness is unknown. We examined this method's validity in assessing OCV effectiveness by comparing the results of TND analyses with those of conventional cohort analyses. Randomized controlled trials of OCV were conducted in Matlab (Bangladesh) and Kolkata (India), and an observational cohort design was used in Zanzibar (Tanzania). For all three studies, VE using the TND was estimated from the odds ratio (OR) relating vaccination status to fecal test status (Vibrio cholerae O1 positive or negative) among diarrheal patients enrolled during surveillance (VE = (1 - OR) × 100%). In cohort analyses of these studies, we employed the Cox proportional hazard model for estimating VE (= (1 - hazard ratio) × 100%). OCV effectiveness estimates obtained using the TND (Matlab: 51%, 95% CI:37-62%; Kolkata: 67%, 95% CI:57-75%) were similar to the cohort analyses of these RCTs (Matlab: 52%, 95% CI:43-60% and Kolkata: 66%, 95% CI:55-74%). The TND VE estimate for the Zanzibar data was 94% (95% CI:84-98%) compared with 82% (95% CI:58-93%) in the cohort analysis. After adjusting for residual confounding in the cohort analysis of the Zanzibar study, using a bias indicator condition, we observed almost no difference in the two estimates. Our findings suggest that the TND is a valid approach for evaluating OCV effectiveness in routine vaccination programs. Copyright © 2015 Elsevier Ltd. All rights reserved.
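
    The TND estimator VE = (1 - OR) × 100% can be computed directly from a 2x2 table of vaccination status by test status. A minimal sketch with invented surveillance counts (not the study's data):

```python
def ve_from_tnd(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
    """VE = (1 - OR) * 100%, where the OR relates vaccination status to
    fecal test status among diarrheal patients under surveillance."""
    odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
    return (1 - odds_ratio) * 100

# Invented counts: vaccinated/unvaccinated among V. cholerae O1-positive
# cases and among test-negative controls.
print(ve_from_tnd(30, 90, 200, 300))  # prints 50.0
```

    Here the vaccinated-to-unvaccinated odds are halved among test-positives relative to test-negatives (OR = 0.5), giving a VE of 50%.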

  15. An atlas-based organ dose estimator for tomosynthesis and radiography

    NASA Astrophysics Data System (ADS)

    Hoye, Jocelyn; Zhang, Yakun; Agasthya, Greeshma; Sturgeon, Greg; Kapadia, Anuj; Segars, W. Paul; Samei, Ehsan

    2017-03-01

    The purpose of this study was to provide patient-specific organ dose estimation based on an atlas of human models for twenty tomosynthesis and radiography protocols. The study utilized a library of 54 adult computational phantoms (age: 18-78 years, weight 52-117 kg) and a validated Monte-Carlo simulation (PENELOPE) of a tomosynthesis and radiography system to estimate organ dose. Positioning of patient anatomy was based on radiographic positioning handbooks. The field of view for each exam was calculated to include relevant organs per protocol. Through simulations, the energy deposited in each organ was tallied to build a reference database of normalized organ doses. The database can be used as the basis of a dose calculator to predict patient-specific organ dose values based on kVp, mAs, exposure in air, and patient habitus for a given protocol. As an example of the utility of this tool, dose to an organ was studied as a function of average patient thickness in the field of view for a given exam and as a function of Body Mass Index (BMI). For tomosynthesis, organ doses can also be studied as a function of x-ray tube position. This work developed comprehensive information on organ dose dependencies across tomosynthesis and radiography. Organ dose generally decreased exponentially with increasing patient size, in a manner that is highly protocol dependent. There was a wide range of variability in organ dose across the patient population, which needs to be incorporated in the metrology of organ dose.
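
    The exponential decrease of dose with patient size can be sketched as a log-linear least-squares fit. Everything here is an assumption for illustration: the model form dose ≈ a·exp(-b·thickness), the data points, and the function name are not taken from the study.

```python
import math

def fit_exponential(thickness_cm, dose_mgy):
    """Fit dose ~ a * exp(-b * thickness) by ordinary least squares
    on log(dose) versus thickness."""
    n = len(thickness_cm)
    ys = [math.log(d) for d in dose_mgy]
    mx = sum(thickness_cm) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(thickness_cm, ys))
             / sum((x - mx) ** 2 for x in thickness_cm))
    return math.exp(my - slope * mx), -slope   # (a, b)

# Invented organ-dose-vs-thickness points for illustration only.
a, b = fit_exponential([18, 22, 26, 30], [1.20, 0.80, 0.53, 0.35])
print(f"a = {a:.2f} mGy, b = {b:.4f} per cm")
```

    Fitting in log space turns the exponential into a straight line, so a and b come from a simple linear regression of log(dose) on thickness.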

  16. Web-based questionnaires to assess perinatal outcome proved to be valid.

    PubMed

    van Gelder, Marleen M H J; Vorstenbosch, Saskia; Derks, Lineke; Te Winkel, Bernke; van Puijenbroek, Eugène P; Roeleveld, Nel

    2017-10-01

    The objective of this study was to validate a Web-based questionnaire completed by the mother to assess perinatal outcome used in a prospective cohort study. For 882 women with an estimated date of delivery between February 2012 and February 2015 who participated in the PRegnancy and Infant DEvelopment (PRIDE) Study, we compared data on pregnancy outcome, including mode of delivery, plurality, gestational age, birth weight and length, head circumference, birth defects, and infant sex, from Web-based questionnaires administered to the mothers 2 months after delivery with data from obstetric records. For continuous variables, we calculated intraclass correlation coefficients (ICCs) with 95% confidence intervals (CIs), whereas sensitivity and specificity were determined for categorical variables. We observed only very small differences between the two methods of data collection for gestational age (ICC, 0.91; 95% CI, 0.90-0.92), birth weight (ICC, 0.96; 95% CI, 0.95-0.96), birth length (ICC, 0.90; 95% CI, 0.87-0.92), and head circumference (ICC, 0.88; 95% CI, 0.80-0.93). Agreement between the Web-based questionnaire and obstetric records was high as well, with sensitivity ranging between 0.86 (termination of pregnancy) and 1.00 (four outcomes) and specificity between 0.96 (term birth) and 1.00 (nine outcomes). Our study provides evidence that Web-based questionnaires could be considered as a valid complementary or alternative method of data collection. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Validation of equations and proposed reference values to estimate fat mass in Chilean university students.

    PubMed

    Gómez Campos, Rossana; Pacheco Carrillo, Jaime; Almonacid Fierro, Alejandro; Urra Albornoz, Camilo; Cossío-Bolaños, Marco

    2018-03-01

    (i) To propose regression equations based on anthropometric measures to estimate fat mass (FM) using dual energy X-ray absorptiometry (DXA) as the reference method, and (ii) to establish population reference standards for equation-derived FM. A cross-sectional study on 6,713 university students (3,354 males and 3,359 females) from Chile aged 17.0 to 27.0 years. Anthropometric measures (weight, height, waist circumference) were taken in all participants. Whole body DXA was performed in 683 subjects. A total of 478 subjects were selected to develop regression equations, and 205 for their cross-validation. Data from 6,030 participants were used to develop reference standards for FM. Equations were generated using stepwise multiple regression analysis. Percentiles were developed using the LMS method. Equations for men were: (i) FM = -35,997.486 + 232.285*Weight + 432.216*CC (R² = 0.73, SEE = 4.1); (ii) FM = -37,671.303 + 309.539*Weight + 66,028.109*ICE (R² = 0.76, SEE = 3.8); while equations for women were: (iii) FM = -13,216.917 + 461.302*Weight + 91.898*CC (R² = 0.70, SEE = 4.6), and (iv) FM = -14,144.220 + 464.061*Weight + 16,189.297*ICE (R² = 0.70, SEE = 4.6). Percentiles proposed included p10, p50, p85, and p95. The developed equations provide valid and accurate estimation of FM in both sexes. The values obtained using the equations may be analyzed from percentiles that allow for categorizing body fat levels by age and sex. Copyright © 2017 SEEN y SED. Published by Elsevier España, S.L.U. All rights reserved.
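
    The men's equation (i) can be evaluated directly. This sketch assumes, based on the coefficient magnitudes and the measures listed, that FM is returned in grams, Weight is in kg, and CC is waist circumference in cm; none of these units are stated explicitly above.

```python
def fat_mass_men_cc(weight_kg, waist_cm):
    """Men's equation (i): FM = -35,997.486 + 232.285*Weight + 432.216*CC."""
    return -35997.486 + 232.285 * weight_kg + 432.216 * waist_cm

# Hypothetical subject: 70 kg body weight, 85 cm waist circumference.
fm = fat_mass_men_cc(70, 85)
print(round(fm / 1000, 2))  # prints 17.0 (interpreting FM in grams -> kg)
```

    For this hypothetical subject the equation yields roughly 17 kg of fat mass, about 24% of body weight, which is a plausible order of magnitude for the assumed units.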

  18. Validation of vision-based range estimation algorithms using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1993-01-01

    The objective of this research was to demonstrate the effectiveness of an optic flow method for passive range estimation using a Kalman-filter implementation with helicopter flight data. This paper is divided into the following areas: (1) ranging algorithm; (2) flight experiment; (3) analysis methodology; (4) results; and (5) concluding remarks. The discussion is presented in viewgraph format.

  19. Estimation of tool wear during CNC milling using neural network-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Ghosh, N.; Ravi, Y. B.; Patra, A.; Mukhopadhyay, S.; Paul, S.; Mohanty, A. R.; Chattopadhyay, A. B.

    2007-01-01

    Cutting tool wear degrades the product quality in manufacturing processes. Monitoring tool wear online is therefore needed to prevent degradation in machining quality. Unfortunately there is no direct way of measuring the tool wear online. Therefore one has to adopt an indirect method wherein the tool wear is estimated from several sensors measuring related process variables. In this work, a neural network-based sensor fusion model has been developed for tool condition monitoring (TCM). Features extracted from a number of machining zone signals, namely cutting forces, spindle vibration, spindle current, and sound pressure level, have been fused to estimate the average flank wear of the main cutting edge. Novel strategies such as signal level segmentation for temporal registration, feature space filtering, outlier removal, and estimation space filtering have been proposed. The proposed approach has been validated by both laboratory and industrial implementations.

  20. MODIS Observation of Aerosols over Southern Africa During SAFARI 2000: Data, Validation, and Estimation of Aerosol Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kaufman, Yoram; Remer, Lorraine; Chu, D. Allen; Mattoo, Shana; Tanre, Didier; Levy, Robert; Li, Rong-Rong; Kleidman, Richard; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Aerosol properties, including optical thickness and size parameters, are retrieved operationally from the MODIS sensor onboard the Terra satellite launched on 18 December 1999. The predominant aerosol type over the Southern African region is smoke, which is generated from biomass burning on land and transported over the southern Atlantic Ocean. The SAFARI-2000 period experienced smoke aerosol emissions from the regular biomass burning activities as well as from the prescribed burns administered under the auspices of the experiment. The MODIS Aerosol Science Team (MAST) formulates and implements strategies for the retrieval of aerosol products from MODIS, as well as for validating and analyzing them in order to estimate aerosol effects on the radiative forcing of climate as accurately as possible. These activities are carried out not only from a global perspective, but also with a focus on specific regions identified as having interesting characteristics, such as the biomass burning phenomenon in southern Africa and the associated smoke aerosol, particulate, and trace gas emissions. Indeed, the SAFARI-2000 aerosol measurements from the ground and from aircraft, along with MODIS, provide excellent data sources for a more intensive validation and a closer study of the aerosol characteristics over Southern Africa. The SAFARI-2000 ground-based measurements of aerosol optical thickness (AOT) from both the automatic Aerosol Robotic Network (AERONET) and handheld Sun photometers have been used to validate MODIS retrievals, based on a sophisticated spatio-temporal technique. The average global monthly distribution of aerosol from MODIS has been combined with other data to calculate the southern African aerosol daily averaged (24 hr) radiative forcing over the ocean for September 2000. 
It is estimated that, on average, for cloud-free conditions over an area of 9 million square km, this predominantly smoke aerosol exerts a forcing of -30 W/square m, close to the terrestrial

  1. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  2. Development, Validation, and Verification of a Self-Assessment Tool to Estimate Agnibala (Digestive Strength).

    PubMed

    Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta

    2017-01-01

    According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala. The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to the 18 to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni. The tool, therefore, may be used to screen the normal population to look for possible susceptibility to certain health conditions. © The Author(s) 2016.

  3. The development and validity of the Salford Gait Tool: an observation-based clinical gait assessment tool.

    PubMed

    Toro, Brigitte; Nester, Christopher J; Farren, Pauline C

    2007-03-01

    To develop the construct, content, and criterion validity of the Salford Gait Tool (SF-GT) and to evaluate agreement between gait observations using the SF-GT and kinematic gait data. Tool development and comparative evaluation. University in the United Kingdom. For designing construct and content validity, convenience samples of 10 children with hemiplegic, diplegic, and quadriplegic cerebral palsy (CP) and 152 physical therapy students and 4 physical therapists were recruited. For developing criterion validity, kinematic gait data of 13 gait clusters containing 56 children with hemiplegic, diplegic, and quadriplegic CP and 11 neurologically intact children was used. For clinical evaluation, a convenience sample of 23 pediatric physical therapists participated. We developed a sagittal plane observational gait assessment tool through a series of design, test, and redesign iterations. The tool's grading system was calibrated using kinematic gait data of 13 gait clusters and was evaluated by comparing the agreement of gait observations using the SF-GT with kinematic gait data. Criterion standard kinematic gait data. There was 58% mean agreement based on grading categories and 80% mean agreement based on degree estimations evaluated with the least significant difference method. The new SF-GT has good concurrent criterion validity.

  4. Validating GEOV3 LAI, FAPAR and vegetation cover estimates derived from PROBA-V observations at 333m over Europe

    NASA Astrophysics Data System (ADS)

    Camacho, Fernando; Sánchez, Jorge; Lacaze, Roselyne; Weiss, Marie; Baret, Frédéric; Verger, Aleixandre; Smets, Bruno; Latorre, Consuelo

    2016-04-01

    The Copernicus Global Land Service (http://land.copernicus.eu/global/) is delivering surface biophysical products derived from satellite observations at global scale. Fifteen years of LAI, FAPAR, and vegetation cover (FCOVER) products, among other indicators, have been generated from SPOT/VGT observations at 1 km spatial resolution (named GEOV1, GEOV2). The continuity of the service since the end of the SPOT/VGT mission (May 2014) is achieved thanks to PROBA-V, which offers observations at a finer spatial resolution (1/3 km). In the context of the FP7 ImagineS project (http://fp7-imagines.eu/), a new algorithm (Weiss et al., this conference), adapted to PROBA-V spectral and spatial characteristics, was designed to provide vegetation products (named GEOV3) as consistent as possible with GEOV1 and GEOV2 whilst providing the near real-time estimates required by some users. It is based on neural network techniques completed with a data filtering and smoothing process. The near real-time estimates are improved through a consolidation period of six dekads during which observations are accumulated every new dekad. The validation of these products is mandatory to provide associated uncertainties for efficient use of this source of information. This work presents an early validation over Europe of the GEOV3 LAI, FAPAR and vegetation cover (FCOVER) products derived from PROBA-V observations at 333 m and 10-day frequency during the year 2014. The validation has been conducted in agreement with the CEOS LPV best practices for global LAI products. Several performance criteria were investigated for the several GEOV3 modes (near real-time, and successive consolidated estimates), including completeness, spatial and temporal consistency, precision and accuracy. The spatial and temporal consistency was evaluated using as reference the similar PROBA-V GEOV1 and MODC5 1 km products over a network of 153 validation sites over Europe (EUVAL). 
The accuracy was assessed with concomitant data collected

  5. Reflectance Estimation from Urban Terrestrial Images: Validation of a Symbolic Ray-Tracing Method on Synthetic Data

    NASA Astrophysics Data System (ADS)

    Coubard, F.; Brédif, M.; Paparoditis, N.; Briottet, X.

    2011-04-01

    Terrestrial geolocalized images are nowadays widely used on the Internet, mainly in urban areas, through immersion services such as Google Street View. In the long run, we seek to enhance the visualization of these images; for that purpose, radiometric corrections must be performed to free them from the illumination conditions at the time of acquisition. Given the simultaneously acquired 3D geometric model of the scene with LIDAR or vision techniques, we face an inverse problem where the illumination and the geometry of the scene are known and the reflectance of the scene is to be estimated. Our main contribution is the introduction of a symbolic ray-tracing rendering to generate parametric images, for quick evaluation and comparison with the acquired images. The proposed approach is then based on an iterative estimation of the reflectance parameters of the materials, using a single rendering pre-processing. We validate the method on synthetic data with linear BRDF models and discuss the limitations of the proposed approach with more general non-linear BRDF models.

  6. A fast estimation of shock wave pressure based on trend identification

    NASA Astrophysics Data System (ADS)

    Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing

    2018-04-01

    In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
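
    A minimal, self-contained sketch of the pipeline's final steps may help make the idea concrete. The assumptions are deliberately loud: a centered moving average stands in for the DCT reconstruction and EMD trend extraction described above, and the "stable interval" is simply the window minimizing the accumulated gradient of the trend, mirroring the areas-under-the-gradient-curve step. This is an illustration of the principle, not the paper's method.

```python
# Simplified trend-identification sketch (assumption: a moving average
# replaces the DCT + EMD trend extraction described in the abstract).

def moving_average(signal, window):
    """Crude low-pass trend: centered moving average."""
    n, half = len(signal), window // 2
    return [sum(signal[max(0, i - half):min(n, i + half + 1)]) /
            len(signal[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def stable_value(signal, window=11, interval=20):
    """Pick the interval whose trend gradient area is smallest and
    return the mean trend value there (the 'stable value')."""
    trend = moving_average(signal, window)
    grad = [abs(trend[i + 1] - trend[i]) for i in range(len(trend) - 1)]
    best = min(range(len(grad) - interval),
               key=lambda s: sum(grad[s:s + interval]))
    seg = trend[best:best + interval]
    return sum(seg) / len(seg)

def estimate_pressure(signal_volts, sensitivity_v_per_mpa, window=11, interval=20):
    """Pressure = stable output value / sensor sensitivity (MPa)."""
    return stable_value(signal_volts, window, interval) / sensitivity_v_per_mpa
```

    With a rising-then-plateau sensor output and a sensitivity of, say, 0.5 V/MPa, the estimate is the plateau voltage divided by the sensitivity.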

  7. Ancestry estimation and control of population stratification for sequence-based association studies.

    PubMed

    Wang, Chaolong; Zhan, Xiaowei; Bragg-Gresham, Jennifer; Kang, Hyun Min; Stambolian, Dwight; Chew, Emily Y; Branham, Kari E; Heckenlively, John; Fulton, Robert; Wilson, Richard K; Mardis, Elaine R; Lin, Xihong; Swaroop, Anand; Zöllner, Sebastian; Abecasis, Gonçalo R

    2014-04-01

    Estimating individual ancestry is important in genetic association studies where population structure leads to false positive signals, although assigning ancestry remains challenging with targeted sequence data. We propose a new method for the accurate estimation of individual genetic ancestry, based on direct analysis of off-target sequence reads, and implement our method in the publicly available LASER software. We validate the method using simulated and empirical data and show that the method can accurately infer worldwide continental ancestry when used with sequencing data sets with whole-genome shotgun coverage as low as 0.001×. For estimates of fine-scale ancestry within Europe, the method performs well with coverage of 0.1×. On an even finer scale, the method improves discrimination between exome-sequenced study participants originating from different provinces within Finland. Finally, we show that our method can be used to improve case-control matching in genetic association studies and to reduce the risk of spurious findings due to population structure.

  8. Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions

    NASA Astrophysics Data System (ADS)

    Poon, P.; Kinoshita, A. M.

    2017-12-01

    Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km² in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine Application Programming Interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model, SSEBop) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000-2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and Normalized Burn Ratio (NBR) are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.

  9. A Direction of Arrival Estimation Algorithm Based on Orthogonal Matching Pursuit

    NASA Astrophysics Data System (ADS)

    Tang, Junyao; Cao, Fei; Liu, Lipeng

    2018-02-01

    In order to address the weak ability of anti-radiation missiles to counter active decoys in modern electronic warfare, a direction of arrival estimation algorithm based on orthogonal matching pursuit is proposed in this paper. The algorithm adopts compressive sensing: array antennas receive the signals, a sparse representation of the signals is obtained, and a corresponding sensing matrix is designed. The signal is then reconstructed by the orthogonal matching pursuit algorithm to estimate the optimal solution. At the same time, the error of the whole measurement system is analyzed and simulated, and the validity of the algorithm is verified. The algorithm greatly reduces the measurement time, the quantity of equipment, and the total amount of calculation, and accurately estimates the angle and strength of the incoming signal. This technology can effectively improve the angle resolution of the missile, which is of reference significance to research on countering active decoys.
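
    The core recovery routine named above, orthogonal matching pursuit (OMP), can be sketched in a few lines. Assumptions: a real-valued dictionary stands in for the complex array steering matrix of a DOA problem, and the greedy atom selection plus least-squares refit shown here are the textbook OMP steps, not the paper's exact implementation.

```python
# Textbook orthogonal matching pursuit (OMP) sketch, pure stdlib.
# In a DOA setting the dictionary columns would be steering vectors
# over a grid of candidate angles (complex-valued); here they are real.

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(D, y, k):
    """Greedily pick k dictionary atoms; return (support, coefficients)."""
    m, n = len(D), len(D[0])
    support, residual = [], y[:]
    for _ in range(k):
        # correlate the residual with every atom, pick the best unused match
        corr = [abs(sum(D[i][j] * residual[i] for i in range(m))) for j in range(n)]
        support.append(max((j for j in range(n) if j not in support),
                           key=lambda j: corr[j]))
        # least-squares refit on the chosen atoms via the normal equations
        G = [[sum(D[i][a] * D[i][b] for i in range(m)) for b in support] for a in support]
        rhs = [sum(D[i][a] * y[i] for i in range(m)) for a in support]
        x = solve(G, rhs)
        approx = [sum(x[t] * D[i][support[t]] for t in range(len(support)))
                  for i in range(m)]
        residual = [y[i] - approx[i] for i in range(m)]
    return support, x
```

    In the DOA application, the recovered support indexes the estimated arrival angles and the coefficients their strengths.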

  10. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    NASA Technical Reports Server (NTRS)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify the parameters directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.

  11. The Effects of Baseline Estimation on the Reliability, Validity, and Precision of CBM-R Growth Estimates

    ERIC Educational Resources Information Center

    Van Norman, Ethan R.; Christ, Theodore J.; Zopluoglu, Cengiz

    2013-01-01

    This study examined the effect of baseline estimation on the quality of trend estimates derived from Curriculum Based Measurement of Oral Reading (CBM-R) progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for schedules ranging from 6-20 weeks for datasets with high and low…

  12. Assessing the Relative Performance of Microwave-based Satellite Rain Rate Retrievals using TRMM Ground Validation Data

    NASA Technical Reports Server (NTRS)

    Wolff, David B.; Fisher, Brad L.

    2008-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (AQUA) and the TRMM Microwave Imager (TMI) in estimating surface rainfall, based on direct instantaneous comparison with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellites is examined via comparisons with GV radar-based rain rate estimates. Because underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB was further stratified into ocean, land and coast categories using a 0.25° terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm hr⁻¹. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates not easily detectable by its scatter-only frequencies. 
AMSU-B, however, agreed well with GV when the matching

  13. Development and validation of a self-administered questionnaire to estimate the distance and mode of children's travel to school in urban India.

    PubMed

    Tetali, Shailaja; Edwards, Phil; Murthy, G V S; Roberts, I

    2015-10-28

    Although some 300 million Indian children travel to school every day, little is known about how they get there. This information is important for transport planners and public health authorities. This paper presents the development of a self-administered questionnaire and examines its reliability and validity in estimating distance and mode of travel to school in a low-resource urban setting. We developed a questionnaire on children's travel to school. We assessed test-retest reliability by repeating the questionnaire one week later (n = 61). The questionnaire was improved and re-tested (n = 68). We examined the convergent validity of distance estimates by comparing estimates based on the nearest landmark to children's homes with a 'gold standard' based on one-to-one interviews with children using detailed maps (n = 50). Most questions showed fair to almost perfect agreement. Questions on usual mode of travel (κ 0.73-0.84) and road injury (κ 0.61-0.72) were found to be more reliable than those on parental permissions (κ 0.18-0.30), perception of safety (κ 0.00-0.54), and physical activity (κ -0.01-0.07). The distance estimated by the nearest-landmark method was not significantly different from that of the in-depth method for walking, 52 m [95% CI -32 m to 135 m], 10% of the mean difference, or for walking and cycling combined, 65 m [95% CI -30 m to 159 m], 11% of the mean difference. For children who used motorized transport (excluding private school bus), the nearest-landmark method underestimated distance by an average of 325 metres [95% CI -664 m to 1314 m], 15% of the mean difference. A self-administered questionnaire was found to provide reliable information on the usual mode of travel to school, and on road injury, in a small sample of children in Hyderabad, India. The 'nearest landmark' method can be applied in similar low-resource settings for a reasonably accurate estimate of the distance from a child's home to school.
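
    The validation arithmetic used above (a mean difference between two distance estimates with its confidence interval) is easy to reproduce. A minimal sketch, using a normal-approximation 95% CI on illustrative data, not the study's:

```python
# Mean difference between paired estimates and its normal-approximation CI.
# Illustrative helper only; the study's data are not reproduced here.
import math

def mean_difference_ci(a, b, z=1.96):
    """Return (mean difference, (low, high) CI) for paired samples a, b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))  # sample SD
    se = sd / math.sqrt(n)
    return mean, (mean - z * se, mean + z * se)
```

    A CI that straddles zero, as for the walking comparison above, is what "not significantly different" refers to.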

  14. An optimal-estimation-based aerosol retrieval algorithm using OMI near-UV observations

    NASA Astrophysics Data System (ADS)

    Jeong, U.; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation(OE)-based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observation was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method were described and discussed in this paper.
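
    The "useful estimates of errors" that the abstract credits to the OE approach follow from the standard optimal-estimation (maximum a posteriori) formulation. As a hedged sketch in generic notation (standard in the field, not taken from this abstract), the retrieval minimizes the cost function

```latex
J(\mathbf{x}) = \left[\mathbf{y}-\mathbf{F}(\mathbf{x})\right]^{\mathsf{T}}\mathbf{S}_{\epsilon}^{-1}\left[\mathbf{y}-\mathbf{F}(\mathbf{x})\right]
              + \left[\mathbf{x}-\mathbf{x}_{a}\right]^{\mathsf{T}}\mathbf{S}_{a}^{-1}\left[\mathbf{x}-\mathbf{x}_{a}\right]
```

    where $\mathbf{y}$ is the measurement vector, $\mathbf{F}$ the forward model (here evaluated online with VLIDORT), $\mathbf{x}_{a}$ the a priori state with covariance $\mathbf{S}_{a}$, and $\mathbf{S}_{\epsilon}$ the measurement error covariance. The posterior covariance $\hat{\mathbf{S}} = (\mathbf{K}^{\mathsf{T}}\mathbf{S}_{\epsilon}^{-1}\mathbf{K} + \mathbf{S}_{a}^{-1})^{-1}$, with $\mathbf{K}$ the Jacobian, is what supplies the per-retrieval error estimates and degrees of freedom mentioned above.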

  15. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Spectrum-based estimators of the bivariate Hurst exponent

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2014-12-01

    We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. As the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, a simulation study showing the performance of the estimators under a varying bandwidth parameter, as well as under varying correlation between the processes and their specification, is provided as well. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time-domain estimators.
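
    As a hedged sketch of the setting (the standard definition in this literature; notation not taken from the abstract): two long-range cross-correlated processes have a cross-power spectrum that diverges at the origin,

```latex
|f_{xy}(\lambda)| \sim G_{xy}\,\lambda^{1-2H_{xy}} \quad \text{as } \lambda \to 0^{+},
```

    so the cross-periodogram estimator regresses the log absolute cross-periodogram on log frequency over the lowest $m$ Fourier frequencies and recovers $\hat{H}_{xy} = (1-\hat{b})/2$ from the fitted slope $\hat{b}$. The bandwidth parameter varied in the simulation study above is this $m$, the portion of the spectrum entering the regression.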

  17. Validation of a skinfold based index for tracking proportional changes in lean mass

    PubMed Central

    Slater, G J; Duthie, G M; Pyne, D B; Hopkins, W G

    2006-01-01

    Background The lean mass index (LMI) is a new empirical measure that tracks within-subject proportional changes in body mass adjusted for changes in skinfold thickness. Objective To compare the ability of the LMI and other skinfold-derived measures of lean mass to monitor changes in lean mass. Methods 20 elite rugby union players undertook full anthropometric profiles on two occasions 10 weeks apart to calculate the LMI and five skinfold-based measures of lean mass. Hydrodensitometry, deuterium dilution, and dual-energy X-ray absorptiometry provided a criterion four-compartment (4C) measure of lean mass for validation purposes. Regression-based measures of validity, derived for within-subject proportional changes through log transformation, included correlation coefficients and standard errors of the estimate. Results The correlation between change scores for the LMI and 4C lean mass was moderate (0.37, 90% confidence interval -0.01 to 0.66) and similar to the correlations for the other practical measures of lean mass (range 0.26 to 0.42). Standard errors of the estimate for the practical measures were in the range of 2.8-2.9%. The LMI correctly identified the direction of change in 4C lean mass for 14 of the 20 athletes, compared with 11 to 13 for the other practical measures of lean mass. Conclusions The LMI is probably as good as other skinfold-based measures for tracking lean mass and is theoretically more appropriate. Given the impracticality of the 4C criterion measure for routine field use, the LMI may offer a convenient alternative for monitoring physique changes, provided its utility is established under various conditions. PMID:16505075

  18. Vision-based stress estimation model for steel frame structures with rigid links

    NASA Astrophysics Data System (ADS)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using the MCS in the presented model, the safety of a structure can be assessed without attaching strain gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influences of the stiffness of the connection and support on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests on a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
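
    The curvature-to-stress step can be illustrated with elementary beam theory. A hedged sketch under stated assumptions: the radius of curvature is fitted as the circumscribed circle through three measured points on the deformed shape, and the extreme-fiber stress follows sigma = E*c/R (standard Euler-Bernoulli bending). The paper's model additionally identifies the rigid-link effective lengths, which this sketch omits.

```python
# Curvature-to-stress illustration (not the paper's full model).
# Three (x, y) points on the measured deformed shape -> circumradius R,
# then bending stress sigma = E * c / R, with E the elastic modulus and
# c the distance from the neutral axis to the extreme fiber.
import math

def radius_of_curvature(p1, p2, p3):
    """Circumradius of the circle through three points: R = abc / (4K)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2.0
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    if area == 0.0:
        return float("inf")  # collinear points: no bending
    return a * b * c / (4.0 * area)

def bending_stress(p1, p2, p3, E, c_fiber):
    """Extreme-fiber bending stress from the measured deformed shape."""
    return E * c_fiber / radius_of_curvature(p1, p2, p3)
```

    Because stress comes straight from the measured geometry, no load or boundary-condition information is needed, which is the property the abstract highlights.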

  19. How do we estimate survival? External validation of a tool for survival estimation in patients with metastatic bone disease-decision analysis and comparison of three international patient populations.

    PubMed

    Piccioli, Andrea; Spinelli, M Silvia; Forsberg, Jonathan A; Wedin, Rikard; Healey, John H; Ippolito, Vincenzo; Daolio, Primo Andrea; Ruggieri, Pietro; Maccauro, Giulio; Gasbarrini, Alessandro; Biagini, Roberto; Piana, Raimondo; Fazioli, Flavio; Luzzati, Alessandro; Di Martino, Alberto; Nicolosi, Francesco; Camnasio, Francesco; Rosa, Michele Attilio; Campanacci, Domenico Andrea; Denaro, Vincenzo; Capanna, Rodolfo

    2015-05-22

    We recently developed a clinical decision support tool capable of estimating the likelihood of survival at 3 and 12 months following surgery for patients with operable skeletal metastases. After making it publicly available on www.PATHFx.org, we attempted to externally validate it using independent, international data. We collected data from patients treated at 13 Italian orthopaedic oncology referral centers between 2010 and 2013, then applied PATHFx to each record, generating a probability of survival at three and 12 months for each patient. We assessed accuracy using the area under the receiver-operating characteristic curve (AUC) and clinical utility using decision curve analysis (DCA), and compared the Italian patient data to the training set (United States) and the first external validation set (Scandinavia). The Italian dataset contained 287 records with at least 12 months of follow-up information. The AUCs for the three-month and 12-month estimates were 0.80 and 0.77, respectively. There were missing data; notably, the surgeon's estimate of survival was missing in the majority of records. Physiologically, Italian patients were similar to patients in the training and first validation sets. However, notable differences were observed in the proportions surviving three and 12 months, suggesting differences in referral patterns and perhaps in indications for surgery. PATHFx was successfully validated in an Italian dataset containing missing data. This study demonstrates its broad applicability to European patients, even in centers with treatment philosophies that differ from those previously studied.
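
    The accuracy metric reported above (AUC) can be computed directly from predicted probabilities and observed outcomes. The sketch below uses the Mann-Whitney formulation with toy values, not the PATHFx records:

```python
# Sketch only (toy data, not PATHFx records): AUC via the Mann-Whitney
# formulation -- the probability that a randomly chosen survivor is ranked
# above a randomly chosen non-survivor, with ties counted as half.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted 12-month survival probabilities and outcomes:
a = auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])  # -> 1.0 for perfect ranking
```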

  20. QbD-Based Development and Validation of a Stability-Indicating HPLC Method for Estimating Ketoprofen in Bulk Drug and Proniosomal Vesicular System.

    PubMed

    Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju

    2016-03-01

    The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for the estimation of ketoprofen. An analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished with isocratic, reversed-phase chromatography using a C-18 column, pH 6.8 phosphate buffer-methanol (50:50, v/v) as the mobile phase at a flow rate of 1.0 mL/min, and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, linearity between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated with a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD in optimizing the chromatographic conditions for a highly sensitive liquid chromatographic method for ketoprofen.

  1. Estimation of nonpaternity in the Mexican population of Nuevo Leon: a validation study with blood group markers.

    PubMed

    Cerda-Flores, R M; Barton, S A; Marty-Gonzalez, L F; Rivas, F; Chakraborty, R

    1999-07-01

    A method for estimating the general rate of nonpaternity in a population was validated using phenotype data on seven blood groups (A1A2BO, MNSs, Rh, Duffy, Lutheran, Kidd, and P) on 396 mother, child, and legal father trios from Nuevo León, Mexico. In all, 32 legal fathers were excluded as the possible father based on genetic exclusions at one or more loci (combined average exclusion probability of 0.694 for specific mother-child phenotype pairs). The maximum likelihood estimate of the general nonpaternity rate in the population was 0.118 ± 0.020. The nonpaternity rates in Nuevo León were also seen to be inversely related with the socioeconomic status of the families, i.e., the highest in the low and the lowest in the high socioeconomic class. We further argue that with the moderately low (69.4%) power of exclusion for these seven blood group systems, the traditional critical values of paternity index (PI > or = 19) were not good indicators of true paternity, since a considerable fraction (307/364) of nonexcluded legal fathers had a paternity index below 19 based on the seven markers. Implications of these results in the context of genetic-epidemiological studies as well as for detection of true fathers for child-support adjudications are discussed, implying the need to employ a battery of genetic markers (possibly DNA-based tests) that yield a higher power of exclusion. We conclude that even though DNA markers are more informative, the probabilistic approach developed here would still be needed to estimate the true rate of nonpaternity in a population or to evaluate the precision of detecting true fathers.

  2. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure.
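
    The alternating idea behind ACE can be illustrated on a toy problem. This is a sketch of the general scheme only, not the authors' survival code: the parameters are split into two blocks, and each block is updated conditionally on the other until convergence, here for a separable least-squares fit y ≈ a·exp(b·x) with synthetic data, where a has a closed form given b and b is found by a 1-D search given a.

```python
# Sketch of alternating conditional estimation on a toy model (assumption:
# this stands in for the paper's flexible Cox extensions, which it is not).
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]  # synthetic data: a = 2, b = 0.7

def sse(a, b):
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

def best_b(a, b, step=0.25):
    # Conditional 1-D minimisation of sse over b, holding a fixed
    # (shrinking-step local search).
    while step > 1e-9:
        if sse(a, b - step) < sse(a, b):
            b -= step
        elif sse(a, b + step) < sse(a, b):
            b += step
        else:
            step /= 2.0
    return b

a, b = 1.0, 0.0
for _ in range(200):
    # Block 1: closed-form update of a conditional on b (linear least squares).
    e = [math.exp(b * x) for x in xs]
    a = sum(y * ei for y, ei in zip(ys, e)) / sum(ei * ei for ei in e)
    # Block 2: update of b conditional on a.
    b = best_b(a, b)
```

Because each step minimises the objective over its own block, the fit never worsens between iterations, which is the property that makes the alternation safe when a joint optimiser is unavailable.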

  3. Impact of sample collection participation on the validity of estimated measures of association in the National Birth Defects Prevention Study when assessing gene-environment interactions.

    PubMed

    Jenkins, Mary M; Reefhuis, Jennita; Herring, Amy H; Honein, Margaret A

    2017-12-01

    To better understand the impact that nonresponse for specimen collection has on the validity of estimates of association, we examined associations between self-reported maternal periconceptional smoking, folic acid use, or pregestational diabetes mellitus and six birth defects among families who did and did not submit buccal cell samples for DNA following a telephone interview as part of the National Birth Defects Prevention Study (NBDPS). Analyses included control families with live born infants who had no birth defects (N = 9,465), families of infants with anorectal atresia or stenosis (N = 873), limb reduction defects (N = 1,037), gastroschisis (N = 1,090), neural tube defects (N = 1,764), orofacial clefts (N = 3,836), or septal heart defects (N = 4,157). Estimated dates of delivery were between 1997 and 2009. For each exposure and birth defect, odds ratios and 95% confidence intervals were calculated using logistic regression stratified by race-ethnicity and sample collection status. Tests for interaction were applied to identify potential differences between estimated measures of association based on sample collection status. Significant differences in estimated measures of association were observed in only four of 48 analyses with sufficient sample sizes. Despite lower than desired participation rates in buccal cell sample collection, this validation provides some reassurance that the estimates obtained for sample collectors and noncollectors are comparable. These findings support the validity of observed associations in gene-environment interaction studies for the selected exposures and birth defects among NBDPS participants who submitted DNA samples.

  4. Assessing the Relative Performance of Microwave-Based Satellite Rain Rate Retrievals Using TRMM Ground Validation Data

    NASA Technical Reports Server (NTRS)

    Wolff, David B.; Fisher, Brad L.

    2010-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (Aqua) and the TRMM Microwave Imager (TMI) in estimating surface rainfall based on direct instantaneous comparisons with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellite estimates is examined via comparisons with space- and time-coincident GV radar-based rain rate estimates. Because underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB was further stratified into ocean, land and coast categories using a 0.25deg terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm/hr. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates, not easily detectable by its scatter-only frequencies. AMSU-B, however

  5. Assessing the Relative Performance of Microwave-Based Satellite Rain Rate Retrievals Using TRMM Ground Validation Data

    NASA Technical Reports Server (NTRS)

    Wolff, David B.; Fisher, Brad L.

    2011-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (Aqua) and the TRMM Microwave Imager (TMI) in estimating surface rainfall based on direct instantaneous comparisons with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellite estimates is examined via comparisons with space- and time-coincident GV radar-based rain rate estimates. Because underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB was further stratified into ocean, land and coast categories using a 0.25deg terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm/hr. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates, not easily detectable by its scatter-only frequencies. AMSU

  6. Validation of the Rapid Estimate for Adolescent Literacy in Medicine Short Form (REALM-TeenS)

    PubMed Central

    Manganello, Jennifer A.; Colvin, Kimberly F.; Chisolm, Deena J.; Arnold, Connie; Hancock, Jill; Davis, Terry

    2017-01-01

    BACKGROUND: This study was designed to develop and validate a brief adolescent health literacy assessment tool (Rapid Estimate of Adolescent Literacy in Medicine Short Form [REALM-TeenS]). METHODS: We combined datasets from 2 existing research studies that used the REALM-Teen (n = 665) and conducted an item response theory analysis. The correlation between scores on the original 66-item REALM-Teen and the proposed REALM-TeenS was calculated, along with the decision consistency across forms with respect to grade level assignment of each adolescent and coefficient α. The proposed REALM-TeenS was validated with original REALM-Teen data from a third independent study (n = 174). RESULTS: Items with the largest discriminations across the scale, from low to high health literacy, were selected for inclusion in REALM-TeenS. From those, a set of 10 items was selected that maintained a reasonable level of SE across ability estimates and correlated highly (r = 0.92) with the original REALM-Teen scores. The coefficient α for the 10-item REALM-TeenS was .82. There was no evidence of model misfit (root mean square error of approximation < 0.001). In the validation sample, REALM-TeenS scores correlated highly with scores on the original REALM-Teen (r = 0.92), and the decision consistency across both forms was 80%. In pilot testing, administration took ∼20 seconds. CONCLUSIONS: The REALM-TeenS offers researchers and clinicians a brief validated screening tool that can be used to assess adolescent health literacy in a variety of settings. Scoring guidelines ensure that reading level assessment is appropriate by age and grade. PMID:28557740

  7. Validation of the Rapid Estimate for Adolescent Literacy in Medicine Short Form (REALM-TeenS).

    PubMed

    Manganello, Jennifer A; Colvin, Kimberly F; Chisolm, Deena J; Arnold, Connie; Hancock, Jill; Davis, Terry

    2017-05-01

    This study was designed to develop and validate a brief adolescent health literacy assessment tool (Rapid Estimate of Adolescent Literacy in Medicine Short Form [REALM-TeenS]). We combined datasets from 2 existing research studies that used the REALM-Teen (n = 665) and conducted an item response theory analysis. The correlation between scores on the original 66-item REALM-Teen and the proposed REALM-TeenS was calculated, along with the decision consistency across forms with respect to grade level assignment of each adolescent and coefficient α. The proposed REALM-TeenS was validated with original REALM-Teen data from a third independent study (n = 174). Items with the largest discriminations across the scale, from low to high health literacy, were selected for inclusion in REALM-TeenS. From those, a set of 10 items was selected that maintained a reasonable level of SE across ability estimates and correlated highly (r = 0.92) with the original REALM-Teen scores. The coefficient α for the 10-item REALM-TeenS was .82. There was no evidence of model misfit (root mean square error of approximation < 0.001). In the validation sample, REALM-TeenS scores correlated highly with scores on the original REALM-Teen (r = 0.92), and the decision consistency across both forms was 80%. In pilot testing, administration took ∼20 seconds. The REALM-TeenS offers researchers and clinicians a brief validated screening tool that can be used to assess adolescent health literacy in a variety of settings. Scoring guidelines ensure that reading level assessment is appropriate by age and grade.

  8. Estimation of cardiac reserve by peak power: validation and initial application of a simplified index

    NASA Technical Reports Server (NTRS)

    Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.

    1999-01-01

    OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise, and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP against TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe measurements, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than with Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction.
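
    The simplified index described above is just mean arterial pressure times peak aortic flow. A minimal sketch, with the usual one-third pulse-pressure MAP estimate and the unit conversions as assumptions (the study derived MAP from brachial sphygmomanometry; illustrative values only):

```python
# Sketch of the simplified peak power (SPP) index:
# SPP = mean arterial pressure * peak aortic flow, in watts.
def simplified_peak_power(sbp_mmhg, dbp_mmhg, peak_flow_l_per_min):
    map_mmhg = dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0  # common MAP estimate
    map_pa = map_mmhg * 133.322                        # mmHg -> pascal
    flow_m3_s = peak_flow_l_per_min / 60.0 / 1000.0    # L/min -> m^3/s
    return map_pa * flow_m3_s                          # watts

# Hypothetical exercise values: 120/80 mmHg and 25 L/min peak aortic flow.
power_w = simplified_peak_power(120, 80, 25)
```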

  9. Robust Foot Clearance Estimation Based on the Integration of Foot-Mounted IMU Acceleration Data

    PubMed Central

    Benoussaad, Mourad; Sijobert, Benoît; Mombaur, Katja; Azevedo Coste, Christine

    2015-01-01

    This paper introduces a method for the robust estimation of foot clearance during walking, using a single inertial measurement unit (IMU) placed on the subject’s foot. The proposed solution is based on double integration and drift cancellation of foot acceleration signals. The method is insensitive to misalignment of IMU axes with respect to foot axes. Details are provided regarding calibration and signal processing procedures. Experimental validation was performed on 10 healthy subjects under three walking conditions: normal, fast and with obstacles. Foot clearance estimation results were compared to measurements from an optical motion capture system. The mean error between them is significantly less than 15% under the various walking conditions. PMID:26703622

  10. Estimation of skull table thickness with clinical CT and validation with microCT.

    PubMed

    Lillie, Elizabeth M; Urban, Jillian E; Weaver, Ashley A; Powers, Alexander K; Stitzel, Joel D

    2015-01-01

    Brain injuries resulting from motor vehicle crashes (MVC) are extremely common, yet the mechanism of injury remains poorly characterized. Skull deformation is believed to be a contributing factor to some types of traumatic brain injury (TBI). Understanding biomechanical contributors to skull deformation would provide further insight into the mechanism of head injury resulting from blunt trauma. In particular, skull thickness is thought to be a very important factor governing deformation of the skull and its propensity for fracture. Current computed tomography (CT) technology is limited in its ability to accurately measure cortical thickness using standard techniques. A method to evaluate cortical thickness using cortical density measured from CT data has been developed previously. This effort validates this technique for measurement of skull table thickness in clinical head CT scans using two postmortem human specimens. Bone samples were harvested from the skulls of two cadavers and scanned with microCT to evaluate the accuracy of the cortical thickness estimated from clinical CT. Clinical scans were collected at 0.488 and 0.625 mm in-plane resolution with 0.625 mm slice thickness. The overall cortical thickness error was determined to be 0.078 ± 0.58 mm for cortical samples thinner than 4 mm, and 91.3% of these differences fell within the scanner resolution. Color maps of clinical CT thickness estimations are comparable to color maps of microCT thickness measurements, indicating good quantitative agreement. These data confirm that the cortical density algorithm successfully estimates skull table thickness from clinical CT scans. The application of this technique to clinical CT scans enables evaluation of cortical thickness in population-based studies.

  11. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals

    PubMed Central

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion-validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized by height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min-1 and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized by height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min-1 and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935

  12. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    PubMed

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A; Bonomi, Alberto G; Moore, Jonathan P; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion-validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized by height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min-1 and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized by height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min-1 and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.
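
    The most predictive feature described above, the intercept of a straight line fitted to heart-rate samples at the start of recovery, normalised by height squared, can be sketched as follows; the function name and sample values are illustrative, not from the study.

```python
# Sketch: HR-recovery intercept feature via ordinary least squares on
# (time, heart rate) samples from the start of the recovery phase.
def hr_recovery_intercept(times_s, hr_bpm, height_m):
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_h = sum(hr_bpm) / n
    slope = (sum((t - mean_t) * (h - mean_h) for t, h in zip(times_s, hr_bpm))
             / sum((t - mean_t) ** 2 for t in times_s))
    intercept = mean_h - slope * mean_t   # OLS intercept at t = 0
    return intercept / height_m ** 2      # normalisation described above

# Hypothetical recovery samples (every 5 s) for a 1.75 m tall subject:
feat = hr_recovery_intercept([0, 5, 10, 15, 20], [150, 140, 132, 125, 119], 1.75)
```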

  13. Validity of Three-Dimensional Photonic Scanning Technique for Estimating Percent Body Fat.

    PubMed

    Shitara, K; Kanehisa, H; Fukunaga, T; Yanai, T; Kawakami, Y

    2013-01-01

    Three-dimensional photonic scanning (3DPS) was recently developed to measure dimensions of a human body surface. The purpose of this study was to explore the validity of body volume measured by 3DPS for estimating percent body fat (%fat). Design, setting, participants, and measurement: body volumes were determined by 3DPS in 52 women and corrected for residual lung volume. The %fat was estimated from body density and compared with the corresponding reference value determined by dual-energy x-ray absorptiometry (DXA). No significant difference was found between the mean values of %fat obtained by 3DPS (22.2 ± 7.6%) and DXA (23.5 ± 4.9%). The root mean square error of %fat between 3DPS and the reference technique was 6.0%. For each body segment, there was a significant positive correlation between 3DPS and DXA values, although the corresponding value for the head was slightly larger for 3DPS than for DXA. Residual lung volume was negatively correlated with the estimated error in %fat. The body volume determined with 3DPS is potentially useful for estimating %fat. A possible strategy for enhancing the measurement accuracy of %fat might be to refine the protocol for preparing the subject's hair prior to scanning and to improve the accuracy of the residual lung volume measurement.
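
    The density-to-%fat step can be sketched as below. The abstract does not state which two-compartment conversion was applied, so the standard Siri (1961) equation is assumed here; the mass, volume and residual lung values are hypothetical.

```python
# Sketch: percent body fat from body density, where density comes from body
# mass and 3DPS volume corrected for residual lung volume.
def percent_fat(mass_kg, volume_l, residual_lung_l):
    density = mass_kg / (volume_l - residual_lung_l)  # kg/L
    return 495.0 / density - 450.0                    # Siri (1961) equation

# Hypothetical subject: 60 kg, 58 L scanned volume, 1.2 L residual lung volume.
pf = percent_fat(60.0, 58.0, 1.2)
```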

  14. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    PubMed

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)). Second, we determined the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used to calibrate both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987, P < 0.001) and small mean difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but higher limits of agreement for the intratest (ICC = 0.975, P < 0.001, mean difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, P < 0.001, mean difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.

  15. Satellite Based Soil Moisture Product Validation Using NOAA-CREST Ground and L-Band Observations

    NASA Astrophysics Data System (ADS)

    Norouzi, H.; Campo, C.; Temimi, M.; Lakhankar, T.; Khanbilvardi, R.

    2015-12-01

    Soil moisture content is among the most important physical parameters in hydrology, climate, and environmental studies. Many microwave-based satellite observations have been utilized to estimate this parameter. The Advanced Microwave Scanning Radiometer 2 (AMSR2) is one of several remote sensors that collect daily information on land surface soil moisture. However, many factors such as ancillary data and vegetation scattering can affect the signal and the estimation. Therefore, this information needs to be validated against "ground-truth" observations. The NOAA Cooperative Remote Sensing Science and Technology (CREST) Center at the City University of New York has a site located at Millbrook, NY, with several in situ soil moisture probes and an L-band radiometer similar to the Soil Moisture Active Passive (SMAP) one. This site is among the SMAP Cal/Val sites. Soil moisture was measured at seven different locations from 2012 to 2015; Hydra probes are used at six of these locations. This study utilizes the observations from in situ data and the L-band radiometer close to the ground (at 3 meters height) to validate and compare soil moisture estimates from AMSR2. Analysis of the measurements and AMSR2 indicated a weak correlation with the Hydra probes and a moderate correlation with the Cosmic-ray Soil Moisture Observing System (COSMOS) probes. Several factors, including the difference between pixel-scale and point measurements, can cause these discrepancies. Interpolation techniques are used to expand the point measurements from six locations to the AMSR2 footprint. Finally, the effects of microwave penetration depth and inconsistencies with ancillary data such as skin temperature are investigated to provide a better understanding of the analysis. The results show that the retrieval algorithm of AMSR2 is appropriate under certain circumstances. A similar validation study will be conducted for the SMAP mission. Keywords: Remote Sensing, Soil

  16. Measuring Housework Participation: The Gap between "Stylised" Questionnaire Estimates and Diary-Based Estimates

    ERIC Educational Resources Information Center

    Kan, Man Yee

    2008-01-01

    This article compares stylised (questionnaire-based) estimates and diary-based estimates of housework time collected from the same respondents. Data come from the Home On-line Study (1999-2001), a British national household survey that contains both types of estimates (sample size = 632 men and 666 women). It shows that the gap between the two…

  17. The validity of a web-based FFQ assessed by doubly labelled water and multiple 24-h recalls.

    PubMed

    Medin, Anine C; Carlsen, Monica H; Hambly, Catherine; Speakman, John R; Strohmaier, Susanne; Andersen, Lene F

    2017-12-01

    The aim of this study was to validate the estimated habitual dietary intake from a newly developed web-based FFQ (WebFFQ), for use in an adult population in Norway. In total, ninety-two individuals were recruited. Total energy expenditure (TEE) measured by doubly labelled water was used as the reference method for energy intake (EI) in a subsample of twenty-nine women, and multiple 24-h recalls (24HR) were used as the reference method for the relative validation of macronutrients and food groups in the entire sample. Absolute differences, ratios, crude and deattenuated correlations, cross-classifications, Bland-Altman plot and plots between misreporting of EI (EI-TEE) and the relative misreporting of food groups (WebFFQ-24HR) were used to assess the validity. Results showed that EI on group level was not significantly different from TEE measured by doubly labelled water (0·7 MJ/d), but ranking abilities were poor (r=-0·18). The relative validation showed an overestimation for the majority of the variables using absolute intakes, especially for the food groups 'vegetables' and 'fish and shellfish', but an improved agreement between the test and reference tool was observed for energy adjusted intakes. Deattenuated correlation coefficients were between 0·22 and 0·89, and low levels of grossly misclassified individuals (0-3 %) were observed for the majority of the energy adjusted variables for macronutrients and food groups. In conclusion, energy estimates from the WebFFQ should be used with caution, but the estimated absolute intakes on group level and ranking abilities seem acceptable for macronutrients and most food groups.
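    Deattenuated correlations of the kind reported in this record correct an observed correlation for random within-person measurement error using Spearman's classical formula, r_true = r_obs / sqrt(rel_x * rel_y), where rel_x and rel_y are the reliabilities of the two measures. A minimal sketch (all numbers are illustrative, not values from the study):

    ```python
    import math

    def deattenuated_r(r_obs, rel_x, rel_y):
        """Spearman's correction for attenuation: divide the observed
        correlation by the geometric mean of the two reliabilities."""
        return r_obs / math.sqrt(rel_x * rel_y)

    # Illustrative numbers only: observed r = 0.35, reliabilities 0.6 and 0.8
    r_true = deattenuated_r(0.35, 0.6, 0.8)
    ```

    With perfectly reliable measures (both reliabilities 1.0) the correction leaves the observed correlation unchanged, which is a quick sanity check on any implementation.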

  18. In Defense of an Instrument-Based Approach to Validity

    ERIC Educational Resources Information Center

    Hood, S. Brian

    2012-01-01

    Paul E. Newton argues in favor of a conception of validity, viz, "the consensus definition of validity," according to which the extension of the predicate "is valid" is a subset of "assessment-based decision-making procedure[s], which [are] underwritten by an argument that the assessment procedure can be used to measure the attribute entailed by…

  19. Integrating indicator-based geostatistical estimation and aquifer vulnerability of nitrate-N for establishing groundwater protection zones

    NASA Astrophysics Data System (ADS)

    Jang, Cheng-Shin; Chen, Shih-Kai

    2015-04-01

    Groundwater nitrate-N contamination occurs frequently in agricultural regions, primarily resulting from surface agricultural activities. The focus of this study is to establish groundwater protection zones based on indicator-based geostatistical estimation and aquifer vulnerability of nitrate-N in the Choushui River alluvial fan in Taiwan. The groundwater protection zones are determined by univariate indicator kriging (IK) estimation, aquifer vulnerability assessment using logistic regression (LR), and integration of the IK estimation and aquifer vulnerability using simple IK with local prior means (sIKlpm). First, according to the statistical significance of source, transport, and attenuation factors dominating the occurrence of nitrate-N pollution, a LR model was adopted to evaluate aquifer vulnerability and to characterize occurrence probability of nitrate-N exceeding 0.5 mg/L. Moreover, the probabilities estimated using LR were regarded as local prior means. IK was then used to estimate the actual extent of nitrate-N pollution. The integration of the IK estimation and aquifer vulnerability was obtained using sIKlpm. Finally, groundwater protection zones were probabilistically determined using the three aforementioned methods, and the estimated accuracy of the delineated groundwater protection zones was gauged using a cross-validation procedure based on observed nitrate-N data. The results reveal that the integration of the IK estimation and aquifer vulnerability using sIKlpm is more robust than univariate IK estimation and aquifer vulnerability assessment using LR for establishing groundwater protection zones. Rigorous management practices for fertilizer use should be implemented in orchards situated in the determined groundwater protection zones.
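    The indicator approach described above begins by coding each observation as exceeding or not exceeding the regulatory threshold, then interpolating those 0/1 values to obtain an exceedance-probability surface. A toy sketch of that first step, using inverse-distance weighting as a crude stand-in for indicator kriging (coordinates and concentrations are hypothetical):

    ```python
    import numpy as np

    # Hypothetical nitrate-N observations (mg/L) at four well locations (x, y)
    coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    nitrate = np.array([0.2, 0.8, 0.6, 0.3])

    threshold = 0.5                                   # indicator threshold (mg/L)
    indicator = (nitrate > threshold).astype(float)   # 1 if exceeding, else 0

    def exceedance_probability(target, coords, indicator, power=2.0):
        """Inverse-distance weighting of indicator data: a crude stand-in
        for indicator kriging that still yields a probability in [0, 1]."""
        d = np.linalg.norm(coords - target, axis=1)
        if np.any(d == 0):                 # target coincides with a well
            return float(indicator[np.argmin(d)])
        w = 1.0 / d**power
        return float(np.sum(w * indicator) / np.sum(w))

    p_center = exceedance_probability(np.array([0.5, 0.5]), coords, indicator)
    ```

    Kriging additionally uses a fitted variogram to weight the indicators, and the paper's sIKlpm variant replaces the global mean with logistic-regression vulnerability probabilities as local prior means.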

  20. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    PubMed

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese, perform its cross-cultural adaptation, and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability: Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
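    Internal consistency of the kind reported here (Cronbach's alpha around 0.88-0.89) is computed from the item variances and the variance of the total scores: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with toy data (not the BREALD-30 items):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Toy scores: three respondents, four perfectly consistent items
    scores = [[1, 1, 1, 1],
              [2, 2, 2, 2],
              [3, 3, 3, 3]]
    ```

    With perfectly correlated items the formula yields alpha = 1.0; real instruments like the BREALD-30 sit below that ceiling.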

  1. Development of water level estimation algorithms using SARAL/Altika dataset and validation over the Ukai reservoir, India

    NASA Astrophysics Data System (ADS)

    Chander, Shard; Ganguly, Debojyoti

    2017-01-01

    Water level was estimated over the Ukai reservoir using the AltiKa radar altimeter onboard the SARAL satellite, with algorithms modified specifically for inland water bodies. The methodology was based on waveform classification, waveform retracking, and dedicated inland range-correction algorithms. The 40-Hz waveforms were classified using linear discriminant analysis and a Bayesian classifier. Waveforms were retracked using the Brown, Ice-2, threshold, and offset center of gravity methods. Retracking algorithms were applied to the full waveform and to subwaveforms (only one leading edge) to estimate the improvement in the retrieved range. European Centre for Medium-Range Weather Forecasts (ECMWF) operational and re-analysis pressure fields, together with global ionosphere maps, were used to estimate the range corrections precisely. Microwave and optical images were used to estimate the extent of the water body and the altimeter track location. Four global positioning system (GPS) field trips were conducted on the same days as the SARAL passes using two dual-frequency GPS receivers: one mounted close to the dam in static mode and the other on a moving vehicle within the reservoir in kinematic mode. An in situ gauge dataset was provided by the Ukai dam authority for the period January 1972 to March 2015. The altimeter-retrieved water levels were then validated against the GPS surveys and the in situ gauge dataset. With a good selection of the virtual station (waveform classification, backscattering coefficient), both the Ice-2 retracker and the subwaveform retracker work well, with an overall root-mean-square error <15 cm. The results show that the AltiKa dataset, owing to its smaller footprint and the sharp trailing edge of the Ka-band waveform, can provide more accurate water level information over inland water bodies.

  2. Bayesian model aggregation for ensemble-based estimates of protein pKa values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke J.; Hogan, Emilie A.; Pulsipher, Trenton C.

    2014-03-01

    This paper investigates an ensemble-based technique called Bayesian Model Averaging (BMA) to improve the performance of protein amino acid pKa predictions. Structure-based pKa calculations play an important role in the mechanistic interpretation of protein structure and are also used to determine a wide range of protein properties. A diverse set of methods currently exists for pKa prediction, ranging from empirical statistical models to ab initio quantum mechanical approaches. However, each of these methods is based on a set of assumptions with inherent biases and sensitivities that can affect a model's accuracy and generalizability for pKa prediction in complicated biomolecular systems. We use BMA to combine eleven diverse prediction methods that each estimate pKa values of amino acids in staphylococcal nuclease. These methods are based on work conducted for the pKa Cooperative, and the pKa measurements are based on experimental work conducted by the García-Moreno lab. Our study demonstrates that the aggregated estimate obtained from BMA outperforms all individual prediction methods in our cross-validation study, with improvements of 40-70% over other method classes. This work illustrates a new possible mechanism for improving the accuracy of pKa prediction and lays the foundation for future work on aggregate models that balance computational cost with prediction accuracy.
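    BMA weights each member model by its posterior probability given held-out data and then averages the predictions with those weights. A minimal sketch under a Gaussian error model with a uniform model prior (all pKa numbers and the noise scale sigma are illustrative assumptions, not values from the study):

    ```python
    import numpy as np

    # Hypothetical pKa predictions from three methods for four residues,
    # plus the measured values (all numbers are illustrative only).
    predictions = np.array([
        [4.1, 6.5, 10.2, 3.8],   # method A
        [4.6, 6.0,  9.7, 4.3],   # method B
        [5.0, 7.2, 10.9, 4.9],   # method C
    ])
    measured = np.array([4.2, 6.4, 10.0, 4.0])

    def bma_weights(preds, y, sigma=0.5):
        """Posterior model weights under a Gaussian error model with a
        uniform model prior: w_k proportional to exp(-SSE_k / (2 sigma^2))."""
        sse = ((preds - y) ** 2).sum(axis=1)
        log_w = -sse / (2 * sigma**2)
        w = np.exp(log_w - log_w.max())      # stabilise before normalising
        return w / w.sum()

    w = bma_weights(predictions, measured)
    aggregate = w @ predictions              # BMA point estimate per residue
    ```

    The better-fitting method dominates the weights, so the aggregate tracks the strongest member while still borrowing from the others; a full BMA treatment would estimate the weights on cross-validation folds rather than the data being predicted.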

  3. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    Recently, MR simulators have gained popularity because they avoid the unnecessary radiation exposure of the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by feature selection and used to train the random forest. The trained random forest is then used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images, and the prediction accuracy was assessed against the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed that the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
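    In a patch-based scheme like the one described, the per-voxel signature is essentially the flattened intensity neighbourhood around that voxel; the forest is then trained on (signature, CT value) pairs. A minimal 2-D sketch of the signature-extraction step (the 3x3 patch radius is an assumption for illustration; a real pipeline would use 3-D patches and a regressor such as scikit-learn's RandomForestRegressor):

    ```python
    import numpy as np

    def extract_patches(img, radius=1):
        """Flatten the (2r+1)x(2r+1) neighbourhood around every interior
        pixel into one feature vector per pixel."""
        r = radius
        h, w = img.shape
        feats = []
        for i in range(r, h - r):
            for j in range(r, w - r):
                feats.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
        return np.array(feats)

    mr = np.arange(25, dtype=float).reshape(5, 5)   # toy MR slice
    X = extract_patches(mr)    # one 9-dimensional signature per interior pixel
    ```

    Each row of X would be paired with the CT intensity of the corresponding voxel in the aligned training CT to form the forest's training set.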

  4. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in a single modeling framework that can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.

  6. Validation of State Counts of Handicapped Children. Volume II--Estimation of the Number of Handicapped Children in Each State.

    ERIC Educational Resources Information Center

    Kaskowitz, David H.

    The booklet provides detailed estimates of handicapping conditions for school-aged populations. The figures are intended to help the federal government validate state child count data as required by P.L. 94-142, the Education for All Handicapped Children Act. Section I covers the methodology used to arrive at the estimates, and it identifies the…

  7. Evaluation of a moderate resolution, satellite-based impervious surface map using an independent, high-resolution validation data set

    USGS Publications Warehouse

    Jones, J.W.; Jarnagin, T.

    2009-01-01

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high-quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - "reference area") and relative error [(satellite predicted area - "reference area") / "reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment allowed evaluation of both the validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and the validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
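    The two error metrics defined above are straightforward to compute per validation region; a minimal sketch with made-up areas:

    ```python
    def impervious_errors(predicted, reference):
        """Absolute and relative ISA error for one validation region,
        as defined in the text: (pred - ref) and (pred - ref) / ref."""
        abs_err = predicted - reference
        rel_err = abs_err / reference
        return abs_err, rel_err

    # Illustrative region: satellite says 18% ISA, reference says 20%
    a, r = impervious_errors(18.0, 20.0)   # -> (-2.0, -0.1)
    ```

    Negative values indicate underestimation by the satellite product, matching the roughly 5% average underestimation reported for the NLCD.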

  8. [Soil moisture estimation method based on both ground-based remote sensing data and air temperature in a summer maize ecosystem].

    PubMed

    Wang, Min Zheng; Zhou, Guang Sheng

    2016-06-01

    Soil moisture is an important component of the soil-plant-atmosphere continuum (SPAC). It is a key factor determining the water status of terrestrial ecosystems, and is also the main source of water supply for crops. In order to estimate soil moisture at different soil depths at a station scale, a soil moisture estimation model was established based on the energy balance equation and the water deficit index (WDI), in terms of remote sensing data (the normalized difference vegetation index and surface temperature) and air temperature. The soil moisture estimation model was validated against data from the drought process experiment of summer maize (Zea mays) under different irrigation treatments carried out during 2014 at the Gucheng eco-agrometeorological experimental station of the China Meteorological Administration. The results indicated that the soil moisture estimation model developed in this paper was able to evaluate soil relative humidity at different soil depths in the summer maize field, and that the hypothesis that the evapotranspiration deficit ratio (i.e., WDI) depends linearly on soil relative humidity was reasonable. The estimation accuracy for 0-10 cm surface soil moisture was the highest (R²=0.90). The RMAEs of the estimated and measured soil relative humidity in deeper soil layers (up to 50 cm) were less than 15% and the RMSEs were less than 20%. This research could provide a reference for drought monitoring and irrigation management.

  9. Are traditional body fat equations and anthropometry valid to estimate body fat in children and adolescents living with HIV?

    PubMed

    Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz

    The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R²=0.85 and a standard error of the estimate of 1.43. Model 2 had an R²=0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  10. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    ERIC Educational Resources Information Center

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  11. Vehicle Position Estimation Based on Magnetic Markers: Enhanced Accuracy by Compensation of Time Delays.

    PubMed

    Byun, Yeun-Sub; Jeong, Rag-Gyo; Kang, Seok-Won

    2015-11-13

    The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test.

  12. Vehicle Position Estimation Based on Magnetic Markers: Enhanced Accuracy by Compensation of Time Delays

    PubMed Central

    Byun, Yeun-Sub; Jeong, Rag-Gyo; Kang, Seok-Won

    2015-01-01

    The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test. PMID:26580622

  13. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
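    The paper fuses the two sensors with Tikhonov regularization in the frequency domain; a first-order complementary filter is a much simpler scheme built on the same idea (trust the integrated gyroscope at high frequencies, the acceleration-based tilt at low frequencies) and sketches the principle. The time constant tau and the signals below are illustrative assumptions, not the paper's method:

    ```python
    import numpy as np

    def fuse_tilt(accel_tilt, gyro_rate, dt, tau=1.0):
        """First-order complementary filter: blend the gyro-propagated
        angle with the acceleration-based tilt at each step."""
        alpha = tau / (tau + dt)          # crossover set by tau
        theta = np.empty_like(accel_tilt)
        theta[0] = accel_tilt[0]
        for k in range(1, len(accel_tilt)):
            theta[k] = alpha * (theta[k - 1] + gyro_rate[k] * dt) \
                       + (1 - alpha) * accel_tilt[k]
        return theta

    # Constant 0.1 rad tilt, zero angular rate: the estimate stays at 0.1
    t = fuse_tilt(np.full(100, 0.1), np.zeros(100), dt=0.01)
    ```

    The regularized, frequency-domain formulation in the paper plays the same role as alpha here but chooses the blend optimally and copes with the ill-posed low-frequency behaviour of the gyroscope.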

  14. A Practical Strategy for sEMG-Based Knee Joint Moment Estimation During Gait and Its Validation in Individuals With Cerebral Palsy

    PubMed Central

    Kwon, Suncheol; Stanley, Christopher J.; Kim, Jung; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    Individuals with cerebral palsy have neurological deficits that may interfere with motor function and lead to abnormal walking patterns. It is important to know the joint moment generated by the patient’s muscles during walking in order to assist the suboptimal gait patterns. In this paper, we describe a practical strategy for estimating the internal moment of a knee joint from surface electromyography (sEMG) and knee joint angle measurements. This strategy requires only isokinetic knee flexion and extension tests to obtain a relationship between the sEMG and the knee internal moment, and it does not necessitate comprehensive laboratory calibration, which typically requires a 3-D motion capture system and ground reaction force plates. Four estimation models were considered based on different assumptions about the functions of the relevant muscles during the isokinetic tests and the stance phase of walking. The performance of the four models was evaluated by comparing the estimated moments with the gold standard internal moment calculated from inverse dynamics. The results indicate that an optimal estimation model can be chosen based on the degree of cocontraction. The estimation error of the chosen model is acceptable (normalized root-mean-squared error: 0.15–0.29, R: 0.71–0.93) compared to previous studies (Doorenbosch and Harlaar, 2003; Doorenbosch and Harlaar, 2004; Doorenbosch, Joosten, and Harlaar, 2005), and this strategy provides a simple and effective solution for estimating knee joint moment from sEMG. PMID:22410952

  15. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    PubMed

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead parameter estimation method for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method has negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types over 80 km propagation of a 16-QAM signal at 22 Gbaud.
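    EVM, the figure of merit used for the coarse parameter search, is the RMS error vector normalised by the RMS power of the ideal constellation. A minimal sketch (QPSK is used for brevity instead of 16-QAM; the symbols are illustrative):

    ```python
    import numpy as np

    def evm_percent(received, reference):
        """Root-mean-square error vector magnitude, normalised by the RMS
        power of the ideal constellation, expressed in percent."""
        err = received - reference
        return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2)
                               / np.mean(np.abs(reference) ** 2))

    # Ideal QPSK symbols with a small additive offset on one symbol
    ref = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
    rx = ref + np.array([0.1 + 0j, 0, 0, 0])
    evm = evm_percent(rx, ref)
    ```

    A parameter search of the kind described would sweep the DFBP parameters and keep the setting that minimises this quantity on the equalised constellation.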

  16. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increase geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  17. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship.

    PubMed

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p<0.001) but largely different (Bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow 1 RM to be estimated from the force-velocity relationship. These estimations are valid; however, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach for monitoring training-induced adaptations.
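    Force-velocity (or load-velocity) 1 RM prediction of this kind typically fits a line to submaximal load-velocity pairs and extrapolates it to an assumed minimal velocity at 1 RM. A sketch with made-up trial data and an assumed 0.17 m/s threshold (neither is from the study):

    ```python
    import numpy as np

    def estimate_1rm(loads, velocities, v_1rm=0.17):
        """Fit the individual load-velocity line and extrapolate to an
        assumed minimal velocity threshold (v_1rm, m/s) to predict 1 RM."""
        slope, intercept = np.polyfit(velocities, loads, 1)
        return slope * v_1rm + intercept

    # Illustrative submaximal trials (kg, m/s); numbers are made up
    loads = np.array([20.0, 40.0, 60.0])
    vel = np.array([1.3, 0.9, 0.5])
    one_rm = estimate_1rm(loads, vel)
    ```

    The systematic bias reported above suggests such an extrapolation can rank athletes well while still missing the actual 1 RM by several kilograms.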

  19. Estimating Population Cause-Specific Mortality Fractions from in-Hospital Mortality: Validation of a New Method

    PubMed Central

    Murray, Christopher J. L; Lopez, Alan D; Barofsky, Jeremy T; Bryson-Cahn, Chloe; Lozano, Rafael

    2007-01-01

    Background: Cause-of-death data for many developing countries are not available. Information on deaths in hospital by cause is available in many low- and middle-income countries but is not a representative sample of deaths in the population. We propose a method to estimate population cause-specific mortality fractions (CSMFs) using data already collected in many middle-income and some low-income developing nations, yet rarely used: in-hospital death records. Methods and Findings: For a given cause of death, a community's hospital deaths are equal to total community deaths multiplied by the proportion of deaths occurring in hospital. If we can estimate the proportion dying in hospital, we can estimate the proportion dying in the population using deaths in hospital. We propose to estimate the proportion of deaths for an age, sex, and cause group that die in hospital from the subset of the population where vital registration systems function or from another population. We evaluated our method using nearly complete vital registration (VR) data from Mexico 1998–2005, which records whether a death occurred in a hospital. In this validation test, we used 45 disease categories. We validated our method in two ways: nationally and between communities. First, we investigated how the method's accuracy changes as we decrease the amount of Mexican VR used to estimate the proportion of each age, sex, and cause group dying in hospital. Decreasing VR data used for this first step from 100% to 9% produces only a 12% maximum relative error between estimated and true CSMFs. Even if Mexico collected full VR information only in its capital city with 9% of its population, our estimation method would produce an average relative error in CSMFs across the 45 causes of just over 10%. Second, we used VR data for the capital zone (Distrito Federal and Estado de Mexico) and estimated CSMFs for the three lowest-development states. Our estimation method gave an average relative error of 20%, 23
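
    The paper's core identity (hospital deaths for a cause equal total community deaths times the in-hospital proportion) can be inverted in a few lines. A minimal sketch with hypothetical two-cause numbers:

```python
def estimate_csmfs(hospital_deaths, p_die_in_hospital):
    """Estimate population cause-specific mortality fractions (CSMFs)
    from in-hospital death counts, given externally estimated
    proportions of deaths in each cause group that occur in hospital.
    Inverts: hospital deaths = total deaths * in-hospital proportion."""
    total_by_cause = {c: hospital_deaths[c] / p_die_in_hospital[c]
                      for c in hospital_deaths}
    all_deaths = sum(total_by_cause.values())
    return {c: n / all_deaths for c, n in total_by_cause.items()}

# Hypothetical community where injury deaths rarely reach hospital.
csmf = estimate_csmfs({"injury": 30, "cardiac": 120},
                      {"injury": 0.3, "cardiac": 0.6})
print(csmf)  # injury ≈ 0.333, cardiac ≈ 0.667
```

    The hard part in practice, as the abstract notes, is estimating the in-hospital proportions for each age-sex-cause group from a population where vital registration works.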

  20. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibrations at each unit problem level. A Bayesian calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale up of a larger carbon capture device.

  1. An Optimal-Estimation-Based Aerosol Retrieval Algorithm Using OMI Near-UV Observations

    NASA Technical Reports Server (NTRS)

    Jeong, U; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation (OE) based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observation was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional lookup tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method were described and discussed in this paper.
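
    For a linear forward model, an optimal-estimation retrieval reduces to a single maximum a posteriori update whose posterior covariance supplies exactly the kind of per-retrieval error estimate this abstract highlights. The sketch below uses the generic Rodgers-style formulas, not the actual OMI algorithm, and the toy numbers are hypothetical:

```python
import numpy as np

def oe_linear_update(xa, Sa, K, Se, y):
    """One optimal-estimation (MAP) update for y = K @ x + noise.
    xa/Sa: prior state and covariance; K: forward-model Jacobian;
    Se: measurement noise covariance. Returns the retrieved state and
    its posterior covariance (the retrieval's own error estimate)."""
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    S_post = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)
    x_hat = xa + S_post @ K.T @ Se_inv @ (y - K @ xa)
    return x_hat, S_post

# Scalar toy problem: prior 0 ± 1, one measurement y = 2 with unit noise.
x_hat, S_post = oe_linear_update(np.array([0.0]), np.eye(1),
                                 np.eye(1), np.eye(1), np.array([2.0]))
print(x_hat[0], S_post[0, 0])  # → 1.0 0.5
```

    The prior and the measurement are equally weighted here, so the retrieval lands halfway between them and the posterior variance is halved; a real retrieval iterates this update around a nonlinear radiative transfer model.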

  2. Patient-specific radiation dose and cancer risk estimation in CT: Part I. Development and validation of a Monte Carlo program

    PubMed Central

    Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P.

    2011-01-01

    Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by −4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (−8.1%, 8.1%) and (−17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. When combined with computer models of actual patients, the program can provide accurate dose

  3. Policy and Validity Prospects for Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  4. Development and Validation of a Calculator for Estimating the Probability of Urinary Tract Infection in Young Febrile Children.

    PubMed

    Shaikh, Nader; Hoberman, Alejandro; Hum, Stephanie W; Alberty, Anastasia; Muniz, Gysella; Kurs-Lasky, Marcia; Landsittel, Douglas; Shope, Timothy

    2018-06-01

    Accurately estimating the probability of urinary tract infection (UTI) in febrile preverbal children is necessary to appropriately target testing and treatment. To develop and test a calculator (UTICalc) that can first estimate the probability of UTI based on clinical variables and then update that probability based on laboratory results. Review of electronic medical records of febrile children aged 2 to 23 months who were brought to the emergency department of Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania. An independent training database comprising 1686 patients brought to the emergency department between January 1, 2007, and April 30, 2013, and a validation database of 384 patients were created. Five multivariable logistic regression models for predicting risk of UTI were trained and tested. The clinical model included only clinical variables; the remaining models incorporated laboratory results. Data analysis was performed between June 18, 2013, and January 12, 2018. Documented temperature of 38°C or higher in children aged 2 months to less than 2 years. With the use of culture-confirmed UTI as the main outcome, cutoffs for high and low UTI risk were identified for each model. The resultant models were incorporated into a calculation tool, UTICalc, which was used to evaluate medical records. A total of 2070 children were included in the study. The training database comprised 1686 children, of whom 1216 (72.1%) were female and 1167 (69.2%) white. The validation database comprised 384 children, of whom 291 (75.8%) were female and 200 (52.1%) white. Compared with the American Academy of Pediatrics algorithm, the clinical model in UTICalc reduced testing by 8.1% (95% CI, 4.2%-12.0%) and decreased the number of UTIs that were missed from 3 cases to none. Compared with empirically treating all children with a leukocyte esterase test result of 1+ or higher, the dipstick model in UTICalc would have reduced the number of treatment delays by 10.6% (95% CI
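
    UTICalc's models are multivariable logistic regressions, applied in two stages: a pretest probability from clinical variables, then an updated probability once laboratory results arrive. The general form can be illustrated with placeholder coefficients, which are not the published model:

```python
import math

def logistic_risk(intercept, coefs, features):
    """Generic logistic-regression risk:
    P = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Coefficients below are hypothetical, not UTICalc's."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical: clinical-only model first, then a dipstick-augmented model.
p_clinical = logistic_risk(-3.0, [1.2, 0.8], [1, 1])          # two clinical risk factors
p_dipstick = logistic_risk(-4.0, [1.2, 0.8, 2.5], [1, 1, 1])  # + positive leukocyte esterase
print(round(p_clinical, 3), round(p_dipstick, 3))  # → 0.269 0.622
```

    Risk cutoffs applied to these probabilities (as the abstract describes) then decide who is tested and who is treated.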

  5. Validation and Uncertainty Estimation of an Ecofriendly and Stability-Indicating HPLC Method for Determination of Diltiazem in Pharmaceutical Preparations

    PubMed Central

    Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo

    2013-01-01

    A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was based on a C18 analytical column using a mobile phase consisting of ethanol:phosphoric acid solution (pH = 2.5) (35:65, v/v). Column temperature was set at 50°C and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear in the diltiazem concentration range of 0.5–50 μg/mL (r² = 0.9996). Precision was evaluated by replicate analysis, in which relative standard deviation (RSD) values for peak areas were below 2.0%. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty (5.63%) of the method was also estimated from method validation data. Accordingly, the proposed validated and sustainable procedure was proved to be suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778

  6. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative trend based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. Multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.

  7. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
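
    The interval under scrutiny here is the textbook chi-square construction: if a spectral estimate has n equivalent degrees of freedom, then n·P̂/P follows a chi-square distribution with n degrees of freedom. A minimal sketch (generic construction, with an assumed degrees-of-freedom count; the paper's point is that it breaks down at tonally associated frequencies):

```python
from scipy.stats import chi2

def psd_confidence_interval(p_hat, dof, alpha=0.05):
    """Chi-square based (1 - alpha) confidence interval for a power
    spectral density estimate p_hat with dof equivalent degrees of
    freedom (e.g. dof = 2 * number of averaged independent segments).
    Valid only when the underlying data are Gaussian and tone-free."""
    lower = dof * p_hat / chi2.ppf(1.0 - alpha / 2.0, dof)
    upper = dof * p_hat / chi2.ppf(alpha / 2.0, dof)
    return lower, upper

lo, hi = psd_confidence_interval(1.0, dof=32)
print(lo < 1.0 < hi)  # True: the interval brackets the estimate
```

    Note the interval is asymmetric about the estimate, a direct consequence of the chi-square distribution's skew.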

  8. Use of a validated algorithm to estimate the annual cost of effective biologic treatment for rheumatoid arthritis.

    PubMed

    Curtis, Jeffrey R; Schabert, Vernon F; Yeaw, Jason; Korn, Jonathan R; Quach, Caroleen; Harrison, David J; Yun, Huifeng; Joseph, George J; Collier, David

    2014-08-01

    To estimate biologic cost per effectively treated patient with rheumatoid arthritis (RA) using a claims-based algorithm for effectiveness. Patients with RA aged 18-63 years in the IMS PharMetrics Plus database were categorized as effectively treated if they met all six criteria: (1) a medication possession ratio ≥80% (subcutaneous) or at least as many infusions as specified in US labeling (intravenous); (2) no biologic dose increase; (3) no biologic switch; (4) no new non-biologic disease-modifying anti-rheumatic drug; (5) no new or increased oral glucocorticoid; and (6) ≤1 glucocorticoid injection. Biologic cost per effectively treated patient was defined as total cost of the index biologic (drug plus intravenous administration) divided by the number of patients categorized by the algorithm as effectively treated. Similar methods were used for the index biologic in the second year and for a second biologic after a switch. Rates that the index biologic was categorized as effective in the first year were 31.0% etanercept (2243/7247), 28.6% adalimumab (1426/4991), 28.6% abatacept (332/1160), 27.2% golimumab (71/261), and 20.2% infliximab (474/2352). Mean biologic cost per effectively treated patient, per the algorithm, was $50,141 etanercept, $53,386 golimumab, $56,942 adalimumab, $73,516 abatacept, and $114,089 infliximab. Biologic cost per effectively treated patient, using this algorithm, was lower for patients who continued the index biologic in the second year and higher after switching. When a claims-based algorithm was applied to a large commercial claims database, etanercept was categorized as the most effective and had the lowest estimated 1-year biologic cost per effectively treated patient. This proxy for effectiveness from claims databases was validated against a clinical effectiveness scale, but analyses of the second year or the year after a biologic switch were not included in the validation. Costs of other medications were not included in cost
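
    The six-criterion effectiveness algorithm and the cost metric reduce to a predicate plus a division. A sketch with hypothetical field names standing in for claims-derived variables:

```python
def effectively_treated(pt):
    """All six claims-based criteria from the algorithm must hold.
    Field names are hypothetical stand-ins, not actual claims fields."""
    return (pt["mpr"] >= 0.80                       # (1) adherence
            and not pt["dose_increase"]             # (2) no biologic dose increase
            and not pt["biologic_switch"]           # (3) no biologic switch
            and not pt["new_nonbiologic_dmard"]     # (4) no new non-biologic DMARD
            and not pt["new_or_increased_oral_gc"]  # (5) no new/increased oral glucocorticoid
            and pt["gc_injections"] <= 1)           # (6) at most one glucocorticoid injection

def cost_per_effectively_treated(total_index_biologic_cost, patients):
    """Total cost of the index biologic divided by the count of
    patients the algorithm categorizes as effectively treated."""
    return total_index_biologic_cost / sum(effectively_treated(p) for p in patients)

base = {"mpr": 0.9, "dose_increase": False, "biologic_switch": False,
        "new_nonbiologic_dmard": False, "new_or_increased_oral_gc": False,
        "gc_injections": 0}
cohort = [dict(base),                      # effective
          dict(base, mpr=0.6),             # fails criterion (1)
          dict(base, dose_increase=True)]  # fails criterion (2)
print(cost_per_effectively_treated(150_000, cohort))  # → 150000.0
```

    The metric is deliberately sensitive to the denominator: a biologic with many algorithm failures accrues full drug cost over few "effective" patients, which is how the per-biologic rankings above arise.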

  9. Estimating cardiorespiratory fitness in well-functioning older adults: treadmill validation of the long distance corridor walk.

    PubMed

    Simonsick, Eleanor M; Fan, Ellen; Fleg, Jerome L

    2006-01-01

    To determine criterion validity of the 400-m walk component of the Long Distance Corridor Walk (LDCW) and develop equations for estimating peak oxygen consumption (VO2) from 400-m time and factors intrinsic to test performance (e.g., heart rate (HR) and systolic blood pressure (SBP) response) in older adults. Cross-sectional validation study. Gerontology Research Center, National Institute on Aging, Baltimore, Maryland. Healthy volunteers (56 men and 46 women) aged 60 to 91 participating in the Baltimore Longitudinal Study of Aging between August 1999 and July 2000. The LDCW, consisting of a 2-minute walk followed immediately by a 400-m walk "done as quickly as possible" over a 20-m course was administered the day after maximal treadmill testing. HR and SBP were measured before testing and at the end of the 400-m walk. Weight, height, activity level, perceived effort, and stride length were also acquired. Peak VO2 ranged from 12.2 to 31.1 mL oxygen/kg per minute, and 400-m time ranged from 2 minutes 52 seconds to 6 minutes 18 seconds. Correlation between 400-m time and peak VO2 was -0.79. The estimating equation from linear regression included 400-m time (partial coefficient of determination (R2)=0.625), long versus short stride (partial R2=0.090), ending SBP (partial R2=0.019), and a correction factor for fast 400-m time (<240 seconds; partial R2=0.020) and explained 75.5% of the variance in peak VO2 (correlation coefficient=0.87). A 400-m walk performed as part of the LDCW provides a valid estimate of peak VO2 in older adults. Incorporating low-cost, safe assessments of fitness in clinical and research settings can identify early evidence of physical decline and individuals who may benefit from therapeutic interventions.

  10. Validation of Persian rapid estimate of adult literacy in dentistry.

    PubMed

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of Adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Quazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administered. The validity of the IREALD-99 was investigated by comparing IREALD-99 scores across categories of education and income level. To investigate further, the correlation of the IREALD-99 with the TOFHLiD was computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and a strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's alpha and test-retest correlation. Cronbach's alpha for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed by education level. IREALD-99 scores were positively related to TOFHLiD scores (ρ = 0.72, P < 0.01). In addition, the IREALD-99 showed positive correlation with self-rated oral health status (ρ = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal

  11. EEG-based workload estimation across affective contexts

    PubMed Central

    Mühl, Christian; Jeunet, Camille; Lotte, Fabien

    2014-01-01

    Workload estimation from electroencephalographic signals (EEG) offers a highly sensitive tool to adapt the human–computer interaction to the user state. To create systems that reliably work in the complexity of the real world, a robustness against contextual changes (e.g., mood), has to be achieved. To study the resilience of state-of-the-art EEG-based workload classification against stress we devise a novel experimental protocol, in which we manipulated the affective context (stressful/non-stressful) while the participant solved a task with two workload levels. We recorded self-ratings, behavior, and physiology from 24 participants to validate the protocol. We test the capability of different, subject-specific workload classifiers using either frequency-domain, time-domain, or both feature varieties to generalize across contexts. We show that the classifiers are able to transfer between affective contexts, though performance suffers independent of the used feature domain. However, cross-context training is a simple and powerful remedy allowing the extraction of features in all studied feature varieties that are more resilient to task-unrelated variations in signal characteristics. Especially for frequency-domain features, across-context training is leading to a performance comparable to within-context training and testing. We discuss the significance of the result for neurophysiology-based workload detection in particular and for the construction of reliable passive brain–computer interfaces in general. PMID:24971046

  12. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.

  13. Validation of a plate diagram sheet for estimation of energy and protein intake in hospitalized patients.

    PubMed

    Bjornsdottir, Rannveig; Oskarsdottir, Erna S; Thordardottir, Friða R; Ramel, Alfons; Thorsdottir, Inga; Gunnarsdottir, Ingibjorg

    2013-10-01

    Validation of simple methods for estimating energy and protein intakes in hospital wards is rarely reported in the literature. The aim was to validate a plate diagram sheet for estimation of energy and protein intakes of patients by comparison with weighed food records. Subjects were inpatients at the Cardio Thoracic ward, Landspitali National University Hospital, Reykjavik, Iceland (N = 73). The ward personnel used a plate diagram sheet to record the proportion (0%, 25%, 50%, 100%) of meals consumed by each subject, for three days. Weighed food records were used as a reference method. On average the plate diagram sheet overestimated energy intake by 45 kcal/day (1119 ± 353 kcal/day versus 1074 ± 360 kcal/day, p = 0.008). Estimation of protein intake was not significantly different between the two methods (50.2 ± 16.4 g/day versus 48.7 ± 17.7 g/day, p = 0.123). By analysing only the meals where ≤50% of the served meal was consumed, according to the plate diagram recording, a slight underestimation was observed. A plate diagram sheet can be used to estimate energy and protein intakes with fair accuracy in hospitalized patients, especially at the group level. Importantly, the plate diagram sheet did not overestimate intakes in patients with a low food intake. Copyright © 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  14. Comparison of machine-learning methods for above-ground biomass estimation based on Landsat imagery

    NASA Astrophysics Data System (ADS)

    Wu, Chaofan; Shen, Huanhuan; Shen, Aihua; Deng, Jinsong; Gan, Muye; Zhu, Jinxia; Xu, Hongwei; Wang, Ke

    2016-07-01

    Biomass is one significant biophysical parameter of a forest ecosystem, and accurate biomass estimation on the regional scale provides important information for carbon-cycle investigation and sustainable forest management. In this study, Landsat satellite imagery data combined with field-based measurements were integrated through comparisons of five regression approaches [stepwise linear regression, K-nearest neighbor, support vector regression, random forest (RF), and stochastic gradient boosting] with two different candidate variable strategies to implement the optimal spatial above-ground biomass (AGB) estimation. The results suggested that the RF algorithm exhibited the best performance in 10-fold cross-validation with respect to R2 (0.63) and root-mean-square error (26.44 ton/ha). Consequently, the map of estimated AGB was generated, with a mean value of 89.34 ton/ha in northwestern Zhejiang Province, China, and a spatial pattern similar to the distribution of local forest species. This research indicates that machine-learning approaches associated with Landsat imagery provide an economical way for biomass estimation. Moreover, ensemble methods using all candidate variables, especially for Landsat images, provide an alternative for regional biomass simulation.
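
    The comparison pipeline here is standard supervised regression under k-fold cross-validation. A minimal sketch of the winning configuration (random forest, 10-fold CV), run on synthetic stand-ins for Landsat-derived predictors and field-measured AGB, since the study's data are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: 6 "spectral" predictors; AGB driven by two of them
# plus noise (units loosely analogous to ton/ha).
X = rng.uniform(0.0, 1.0, size=(200, 6))
y = 100.0 * X[:, 0] + 50.0 * X[:, 1] + rng.normal(0.0, 10.0, size=200)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
r2_scores = cross_val_score(rf, X, y, cv=10, scoring="r2")
print(f"mean 10-fold R^2: {r2_scores.mean():.2f}")
```

    Swapping `RandomForestRegressor` for the other four estimators reproduces the study's comparison design, with the cross-validated R² and RMSE as the selection criteria.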

  15. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    PubMed

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.

  16. Oxidative DNA damage background estimated by a system model of base excision repair

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, B A; Wilson, III, D M

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.

  17. Method for Estimating Three-Dimensional Knee Rotations Using Two Inertial Measurement Units: Validation with a Coordinate Measurement Machine

    PubMed Central

    Vitali, Rachel V.; Cain, Stephen M.; Zaferiou, Antonia M.; Ojeda, Lauro V.; Perkins, Noel C.

    2017-01-01

    Three-dimensional rotations across the human knee serve as important markers of knee health and performance in multiple contexts including human mobility, worker safety and health, athletic performance, and warfighter performance. While knee rotations can be estimated using optical motion capture, that method is largely limited to the laboratory and small capture volumes. These limitations may be overcome by deploying wearable inertial measurement units (IMUs). The objective of this study is to present a new IMU-based method for estimating 3D knee rotations and to benchmark the accuracy of the results using an instrumented mechanical linkage. The method employs data from shank- and thigh-mounted IMUs and a vector constraint for the medial-lateral axis of the knee during periods when the knee joint functions predominantly as a hinge. The method is carefully validated using data from high precision optical encoders in a mechanism that replicates 3D knee rotations spanning (1) pure flexion/extension, (2) pure internal/external rotation, (3) pure abduction/adduction, and (4) combinations of all three rotations. Regardless of the movement type, the IMU-derived estimates of 3D knee rotations replicate the truth data with high confidence (RMS error < 4° and correlation coefficient r≥0.94). PMID:28846613

  18. Evaluating abundance estimate precision and the assumptions of a count-based index for small mammals

    USGS Publications Warehouse

    Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.

    2009-01-01

Conservation and management of small mammals requires reliable knowledge of population size. We investigated precision of mark–recapture and removal abundance estimates generated from live-trapping and snap-trapping data collected at sites on Guam (n = 7), Rota (n = 4), Saipan (n = 5), and Tinian (n = 3), in the Mariana Islands. We also evaluated a common index, captures per unit effort (CPUE), as a predictor of abundance. In addition, we evaluated cost and time associated with implementing live-trapping and snap-trapping and compared species-specific capture rates of selected live- and snap-traps. For all species, mark–recapture estimates were consistently more precise than removal estimates based on coefficients of variation and 95% confidence intervals. The predictive utility of CPUE was poor but improved with increasing sampling duration. Nonetheless, modeling of sampling data revealed that underlying assumptions critical to application of an index of abundance, such as constant capture probability across space, time, and individuals, were not met. Although snap-trapping was cheaper and faster than live-trapping, the time difference was negligible when site preparation time was considered. Rattus diardii spp. captures were greatest in Haguruma live-traps (Standard Trading Co., Honolulu, HI) and Victor snap-traps (Woodstream Corporation, Lititz, PA), whereas Suncus murinus and Mus musculus captures were greatest in Sherman live-traps (H. B. Sherman Traps, Inc., Tallahassee, FL) and Museum Special snap-traps (Woodstream Corporation). Although snap-trapping and CPUE may have utility after validation against more rigorous methods, validation should occur across the full range of study conditions. Resources required for this level of validation would likely be better allocated towards implementing rigorous and robust methods.
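The precision comparison above rests on coefficients of variation of abundance estimates. As a sketch of how a closed-population mark-recapture estimate and its CV arise, here is Chapman's bias-corrected form of the Lincoln-Petersen estimator, a standard two-sample estimator (the trap counts below are hypothetical, not from the study):

```python
import math

def chapman_estimate(marked, captured, recaptured):
    # Chapman's bias-corrected Lincoln-Petersen estimator and its
    # standard error (Seber's variance formula)
    n_hat = (marked + 1) * (captured + 1) / (recaptured + 1) - 1
    var = ((marked + 1) * (captured + 1) * (marked - recaptured) *
           (captured - recaptured)) / ((recaptured + 1) ** 2 * (recaptured + 2))
    return n_hat, math.sqrt(var)

# Hypothetical session: 50 animals marked, 60 caught later, 20 of them marked
n_hat, se = chapman_estimate(marked=50, captured=60, recaptured=20)
cv = se / n_hat  # coefficient of variation, the precision measure compared above
```

A smaller CV means a tighter 95% confidence interval, which is the sense in which the mark-recapture estimates outperformed the removal estimates.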

  19. Galileo FOC Satellite Group Delay Estimation based on Raw Method and published IOV Metadata

    NASA Astrophysics Data System (ADS)

    Reckeweg, Florian; Schönemann, Erik; Springer, Tim; Enderle, Werner

    2017-04-01

In December 2016, the European GNSS Agency (GSA) published the Galileo In-Orbit Validation (IOV) satellite metadata. These metadata include, among others, the so-called Galileo satellite group delays, which were measured in an absolute sense by the satellite manufacturer on-ground for all three Galileo frequency bands E1, E5 and E6. Galileo is thus the first Global Navigation Satellite System (GNSS) for which absolute calibration values for satellite on-board group delays have been published. The different satellite group delays for the three frequency bands mean that the signals are not transmitted at exactly the same epoch. Up to now, due to the lack of absolute group delays, it has been common practice in GNSS analyses to estimate and apply the differences of these satellite group delays, commonly known as differential code biases (DCBs). However, this has the drawback that the determination of the "raw" clock and the absolute ionosphere is not possible. The use of absolute bias calibrations for satellites and receivers is a major step towards more realistic (in a physical sense) clock and atmosphere estimates. The Navigation Support Office at the European Space Operations Centre (ESOC) was involved from the beginning in the validation process of the Galileo metadata. For the work presented here, we use the absolute bias calibrations of the Galileo IOV satellites to estimate and validate the absolute receiver group delays of the ESOC GNSS network and vice versa. The receiver group delays were calibrated in an exemplary calibration campaign with an IFEN GNSS signal simulator at ESOC. Based on the calibrated network, making use of the ionosphere constraints given by the IOV satellites, GNSS raw observations are processed to estimate satellite group delays for the operational Galileo Full Operational Capability (FOC) satellites. In addition, "raw" satellite clock offsets are estimated, which are free of the

  20. JPL/USC GAIM: Validating COSMIC and Ground-Based GPS Assimilation Results to Estimate Ionospheric Electron Densities

    NASA Astrophysics Data System (ADS)

    Komjathy, A.; Wilson, B.; Akopian, V.; Pi, X.; Mannucci, A.; Wang, C.

    2008-12-01

tracing applications and trans-ionospheric path delay calibration. In the presentation, we will discuss the expected impact of NRT COSMIC occultation and NRT ground-based measurements and present validation results for ingest of COSMIC data into GAIM using measurements from World Days. We will quality check our COSMIC-derived products by comparing Abel profiles and JPL-processed results. Furthermore, we will validate GAIM assimilation results using Incoherent Scatter Radar measurements from Arecibo, Jicamarca and Millstone Hill datasets. We will conclude by characterizing the improved electron density states using dual-frequency altimeter-derived Jason vertical TEC measurements.

  1. The Validity of Value-Added Estimates from Low-Stakes Testing Contexts: The Impact of Change in Test-Taking Motivation and Test Consequences

    ERIC Educational Resources Information Center

    Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.

    2016-01-01

Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…

  2. Validity and reliability of a food frequency questionnaire to estimate dietary intake among Lebanese children.

    PubMed

    Moghames, Patricia; Hammami, Nour; Hwalla, Nahla; Yazbeck, Nadine; Shoaib, Hikma; Nasreddine, Lara; Naja, Farah

    2016-01-12

Nutritional status during childhood is critical given its effect on growth and development as well as its association with disease risk later in life. The Middle East and North Africa (MENA) region is experiencing alarming rates of childhood malnutrition, both over- and under-nutrition. Hence, there is a need for valid tools to assess dietary intake for children in this region. To date, there are no validated dietary assessment tools for children in any country of the MENA region. The main objective of this study was to examine the validity and reliability of a Food Frequency Questionnaire (FFQ) for the assessment of dietary intake among Lebanese children. Children, aged 5 to 10 years (n = 111), were recruited from public and private schools of Beirut, Lebanon. Mothers (proxies to report their children's dietary intake) completed two FFQs, four weeks apart. Four 24-hour recalls (24-HRs) were collected weekly over the course of the study. Spearman correlations and Bland-Altman plots were used to assess validity. Linear regression models were used to derive calibration factors for boys and girls. Reproducibility statistics included Intraclass Correlation Coefficient (ICC) and percent agreement. Correlation coefficients between dietary intake estimates derived from FFQ and 24-HRs were significant at p < 0.001, with the highest correlation observed for energy (0.54) and the lowest for monounsaturated fatty acids (0.26). The majority of data points in the Bland-Altman plots lay between the limits of agreement, close to the middle horizontal line. After applying the calibration factors for boys and girls, the mean energy and nutrient intakes estimated by the FFQ were similar to those obtained by the mean 24-HRs. As for reproducibility, ICC ranged between 0.31 for trans-fatty acids and 0.73 for calcium intakes. Over 80% of study participants were classified in the same or adjacent quartile of energy and nutrient intake. Findings of this study showed that the
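The Bland-Altman agreement analysis used here reduces to a mean difference (bias) and its 95% limits of agreement. A minimal sketch with hypothetical energy intakes (all numbers invented, not from the study):

```python
import statistics

def bland_altman_limits(method_a, method_b):
    # bias ± 1.96 SD of the pairwise differences: the 95% limits of
    # agreement between two measurement methods
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

# Hypothetical energy intakes (kcal/day) from an FFQ and mean 24-HRs
ffq = [2100, 1850, 2300, 1700, 2050]
recall = [2000, 1900, 2200, 1650, 2100]
low, bias, high = bland_altman_limits(ffq, recall)
```

"Most points lying between the limits of agreement, close to the middle line" means most pairwise differences fall inside (low, high) and cluster near the bias.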

  3. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.

  4. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    PubMed Central

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724
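The internal-reliability figures quoted above (0.88-0.89) are Cronbach's alpha values. As a sketch of how alpha is computed from item-level data (the respondent-by-item scores below are invented for illustration):

```python
import statistics

def cronbach_alpha(items):
    # items: one list of scores per item (rows = items, columns = respondents)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    item_var = sum(statistics.pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Hypothetical scores for 3 items across 4 respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8], [1, 3, 5, 7]])
```

"Alpha ranged from 0.88 to 0.89 when words were deleted individually" means alpha was recomputed 30 times, each time dropping one item from the `items` list.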

  5. Validation of Web-Based Physical Activity Measurement Systems Using Doubly Labeled Water

    PubMed Central

    Yamaguchi, Yukio; Yamada, Yosuke; Tokushima, Satoru; Hatamoto, Yoichi; Sagayama, Hiroyuki; Kimura, Misaka; Higaki, Yasuki; Tanaka, Hiroaki

    2012-01-01

Background Online or Web-based measurement systems have been proposed as convenient methods for collecting physical activity data. We developed two Web-based physical activity systems—the 24-hour Physical Activity Record Web (24hPAR WEB) and 7 days Recall Web (7daysRecall WEB). Objective To examine the validity of two Web-based physical activity measurement systems using the doubly labeled water (DLW) method. Methods We assessed the validity of the 24hPAR WEB and 7daysRecall WEB in 20 individuals, aged 25 to 61 years. The order of email distribution and subsequent completion of the two Web-based measurement systems was randomized. Each measurement tool was used for a week. The participants’ activity energy expenditure (AEE) and total energy expenditure (TEE) were assessed over each week using the DLW method and compared with the respective energy expenditures estimated using the Web-based systems. Results The mean AEE was 3.90 (SD 1.43) MJ estimated using the 24hPAR WEB and 3.67 (SD 1.48) MJ measured by the DLW method. The Pearson correlation for AEE between the two methods was r = .679 (P < .001). The Bland-Altman 95% limits of agreement ranged from –2.10 to 2.57 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .874 (P < .001). The mean AEE was 4.29 (SD 1.94) MJ using the 7daysRecall WEB and 3.80 (SD 1.36) MJ by the DLW method. The Pearson correlation for AEE between the two methods was r = .144 (P = .54). The Bland-Altman 95% limits of agreement ranged from –3.83 to 4.81 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .590 (P = .006). The average input times using terminal devices were 8 minutes and 10 seconds for the 24hPAR WEB and 6 minutes and 38 seconds for the 7daysRecall WEB. Conclusions Both Web-based systems were found to be effective methods for collecting physical activity data and are appropriate for use in epidemiological studies. Because the measurement

  6. Long-term monitoring of endangered Laysan ducks: Index validation and population estimates 1998–2012

    USGS Publications Warehouse

    Reynolds, Michelle H.; Courtot, Karen; Brinck, Kevin W.; Rehkemper, Cynthia; Hatfield, Jeffrey

    2015-01-01

Monitoring endangered wildlife is essential to assessing management or recovery objectives and learning about population status. We tested assumptions of a population index for endangered Laysan duck (or teal; Anas laysanensis) monitored using mark–resight methods on Laysan Island, Hawai’i. We marked 723 Laysan ducks between 1998 and 2009 and identified seasonal surveys through 2012 that met accuracy and precision criteria for estimating population abundance. Our results provide a 15-y time series of seasonal population estimates at Laysan Island. We found differences in detection among seasons and how observed counts related to population estimates. The highest counts and the strongest relationship between count and population estimates occurred in autumn (September–November). The best autumn surveys yielded population abundance estimates that ranged from 674 (95% CI = 619–730) in 2003 to 339 (95% CI = 265–413) in 2012. A population decline of 42% was observed between 2010 and 2012 after consecutive storms and Japan’s Tōhoku earthquake-generated tsunami in 2011. Our results show positive correlations between the seasonal maximum counts and population estimates from the same date, and support the use of standardized bimonthly counts of unmarked birds as a valid index to monitor trends among years within a season at Laysan Island.

  7. Validation of Physical Activity Tracking via Android Smartphones Compared to ActiGraph Accelerometer: Laboratory-Based and Free-Living Validation Studies.

    PubMed

    Hekler, Eric B; Buman, Matthew P; Grieco, Lauren; Rosenberger, Mary; Winter, Sandra J; Haskell, William; King, Abby C

    2015-04-15

    There is increasing interest in using smartphones as stand-alone physical activity monitors via their built-in accelerometers, but there is presently limited data on the validity of this approach. The purpose of this work was to determine the validity and reliability of 3 Android smartphones for measuring physical activity among midlife and older adults. A laboratory (study 1) and a free-living (study 2) protocol were conducted. In study 1, individuals engaged in prescribed activities including sedentary (eg, sitting), light (sweeping), moderate (eg, walking 3 mph on a treadmill), and vigorous (eg, jogging 5 mph on a treadmill) activity over a 2-hour period wearing both an ActiGraph and 3 Android smartphones (ie, HTC MyTouch, Google Nexus One, and Motorola Cliq). In the free-living study, individuals engaged in usual daily activities over 7 days while wearing an Android smartphone (Google Nexus One) and an ActiGraph. Study 1 included 15 participants (age: mean 55.5, SD 6.6 years; women: 56%, 8/15). Correlations between the ActiGraph and the 3 phones were strong to very strong (ρ=.77-.82). Further, after excluding bicycling and standing, cut-point derived classifications of activities yielded a high percentage of activities classified correctly according to intensity level (eg, 78%-91% by phone) that were similar to the ActiGraph's percent correctly classified (ie, 91%). Study 2 included 23 participants (age: mean 57.0, SD 6.4 years; women: 74%, 17/23). Within the free-living context, results suggested a moderate correlation (ie, ρ=.59, P<.001) between the raw ActiGraph counts/minute and the phone's raw counts/minute and a strong correlation on minutes of moderate-to-vigorous physical activity (MVPA; ie, ρ=.67, P<.001). Results from Bland-Altman plots suggested close mean absolute estimates of sedentary (mean difference=-26 min/day of sedentary behavior) and MVPA (mean difference=-1.3 min/day of MVPA) although there was large variation. Overall, results suggest

  8. Comprehensive tire-road friction coefficient estimation based on signal fusion method under complex maneuvering operations

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, K.; Jia, G.; Ran, X.; Song, J.; Han, Z.-Q.

    2015-05-01

The accurate estimation of the tire-road friction coefficient plays a significant role in vehicle dynamics control. The estimation method should be timely and reliable enough to meet control requirements: the contact friction characteristics between the tire and the road should be recognized before intervention is needed, to keep the driver and passengers safe from drifting and loss of control. In addition, the estimation method should be stable and feasible under complex maneuvering operations to guarantee control performance. This paper proposes a signal fusion method that combines the available signals to estimate road friction, building on individual estimates for braking, driving and steering conditions. From the input characteristics and the vehicle and tire states provided by sensors, the maneuvering condition is recognized; from this, certainty factors for the friction estimates of the three conditions are obtained, and the comprehensive road friction is then calculated. Experimental vehicle tests validate the effectiveness of the proposed method through complex maneuvering operations; the estimated road friction coefficient based on the signal fusion method is sufficiently timely and accurate to satisfy control demands.
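The fusion step can be pictured as a certainty-weighted average of the per-condition friction estimates. This is a much-simplified sketch of the idea, not the paper's actual algorithm; all names and numbers are hypothetical:

```python
def fuse_friction(estimates, certainty):
    # Certainty-weighted fusion: each maneuvering condition (braking,
    # driving, steering) contributes its own friction estimate, weighted
    # by how strongly that condition is currently excited.
    total = sum(certainty.values())
    if total == 0:
        return None  # no condition sufficiently excited to judge friction
    return sum(estimates[k] * certainty[k] for k in estimates) / total

# Hypothetical snapshot: a hard-braking maneuver dominates the evidence
mu = fuse_friction(
    estimates={"braking": 0.82, "driving": 0.75, "steering": 0.78},
    certainty={"braking": 0.6, "driving": 0.1, "steering": 0.3},
)
```

Returning `None` when all certainty factors vanish mirrors the paper's point that friction can only be identified when the tire is sufficiently excited by some maneuver.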

  9. Validation of the Female Sexual Function Index (FSFI) for web-based administration.

    PubMed

    Crisp, Catrina C; Fellner, Angela N; Pauls, Rachel N

    2015-02-01

    Web-based questionnaires are becoming increasingly valuable for clinical research. The Female Sexual Function Index (FSFI) is the gold standard for evaluating female sexual function; yet, it has not been validated in this format. We sought to validate the Female Sexual Function Index (FSFI) for web-based administration. Subjects enrolled in a web-based research survey of sexual function from the general population were invited to participate in this validation study. The first 151 respondents were included. Validation participants completed the web-based version of the FSFI followed by a mailed paper-based version. Demographic data were collected for all subjects. Scores were compared using the paired t test and the intraclass correlation coefficient. One hundred fifty-one subjects completed both web- and paper-based versions of the FSFI. Those subjects participating in the validation study did not differ in demographics or FSFI scores from the remaining subjects in the general population study. Total web-based and paper-based FSFI scores were not significantly different (mean 20.31 and 20.29 respectively, p = 0.931). The six domains or subscales of the FSFI were similar when comparing web and paper scores. Finally, intraclass correlation analysis revealed a high degree of correlation between total and subscale scores, r = 0.848-0.943, p < 0.001. Web-based administration of the FSFI is a valid alternative to the paper-based version.
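The agreement analysis here rests on the intraclass correlation coefficient. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures; one common variant, since the abstract does not state which form was used), with hypothetical paired web/paper scores:

```python
import statistics

def icc_2_1(web, paper):
    # ICC(2,1) from the two-way ANOVA mean squares:
    # (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
    n, k = len(web), 2
    rows = list(zip(web, paper))
    grand = statistics.mean(web + paper)
    row_means = [statistics.mean(r) for r in rows]
    col_means = [statistics.mean(web), statistics.mean(paper)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)          # between-subjects mean square
    msc = ss_cols / (k - 1)          # between-methods mean square
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical total FSFI scores for 5 subjects on web and paper versions
icc = icc_2_1([20, 25, 30, 18, 22], [21, 24, 29, 19, 23])
```

Values near 1, like the 0.848-0.943 range reported above, indicate that the two administration modes rank and scale subjects almost identically.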

  10. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Validation of an Innovative Satellite-Based UV Dosimeter

    NASA Astrophysics Data System (ADS)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated using both ground-based measurements and an in-vivo assessment of erythemal effects on volunteers with controlled exposure to solar radiation. Both validations showed that the satellite-based UV dosimeter has the accuracy and reliability needed for health-related applications. The app with this satellite-based UV dosimeter also includes related functionalities such as the provision of safe sun exposure time updated in real time and an end-of-exposure visual/sound alert. This app will be launched on the global market by siHealth Ltd in May 2016 under the name "HappySun", available both for Android and for iOS devices (more info on http://www.happysun.co.uk). Extensive R&D activities are on-going to further improve the satellite-based UV dosimeter's accuracy.

  12. Development and validation of PediaTrac™: A web-based tool to track developing infants.

    PubMed

    Lajiness-O'Neill, Renée; Brooks, Judith; Lukomski, Angela; Schilling, Stephen; Huth-Bocks, Alissa; Warschausky, Seth; Flores, Ana-Mercedes; Swick, Casey; Nyman, Tristin; Andersen, Tiffany; Morris, Natalie; Schmitt, Thomas A; Bell-Smith, Jennifer; Moir, Barbara; Hodges, Elise K; Lyddy, James E

    2018-02-01

PediaTrac™, a 363-item web-based tool to track infant development, administered in modules of ~40 items per sampling period (newborn (NB), 2, 4, 6, 9, and 12 months), was validated. Caregivers answered demographic, medical, and environmental questions, and questions covering the sensorimotor, feeding/eating, sleep, speech/language, cognition, social-emotional, and attachment domains. Expert Panel Reviews and Cognitive Interviews (CI) were conducted to validate the item bank. Classical Test Theory (CTT) and Item Response Theory (IRT) methods were employed to examine the dimensionality and psychometric properties of PediaTrac with pooled longitudinal and cross-sectional cohorts (N = 132). Intraclass correlation coefficients (ICC) for the Expert Panel Review revealed moderate agreement at 6 months and good reliability at the other sampling periods. ICC estimates for CI revealed moderate reliability regarding clarity of the items at NB and 4 months, good reliability at 2, 9, and 12 months, and excellent reliability at 6 months. CTT revealed good coefficient alpha estimates (α ≥ 0.77 for five of the six ages) for the Social-Emotional/Communication, Attachment (α ≥ 0.89 for all ages), and Sensorimotor (α ≥ 0.75 at 6 months) domains, revealing the need for better targeting of sensorimotor items. IRT modeling revealed good reliability (r = 0.85-0.95) for three distinct domains (Feeding/Eating, Social-Emotional/Communication and Attachment) and four subdomains (Feeding Breast/Formula, Feeding Solid Food, Social-Emotional Information Processing, Communication/Cognition). Convergent and discriminant construct validity were demonstrated between our IRT-modeled domains and constructs derived from existing developmental, behavioral and caregiver measures. Our Attachment domain was significantly correlated with existing measures at the NB and 2-month periods, while the Social-Emotional/Communication domain was highly correlated with

  13. Development and validation of a nomogram to estimate the pretest probability of cancer in Chinese patients with solid solitary pulmonary nodules: A multi-institutional study.

    PubMed

    She, Yunlang; Zhao, Lilan; Dai, Chenyang; Ren, Yijiu; Jiang, Gening; Xie, Huikang; Zhu, Huiyuan; Sun, Xiwen; Yang, Ping; Chen, Yongbing; Shi, Shunbin; Shi, Weirong; Yu, Bing; Xie, Dong; Chen, Chang

    2017-11-01

To develop and validate a nomogram to estimate the pretest probability of malignancy in Chinese patients with solid solitary pulmonary nodules (SPNs). A primary cohort of 1798 patients with pathologically confirmed solid SPNs after surgery was retrospectively studied at five institutions from January 2014 to December 2015. A nomogram based on independent prediction factors of malignant solid SPN was developed. Predictive performance was also evaluated using the calibration curve and the area under the receiver operating characteristic curve (AUC). The mean age of the cohort was 58.9 ± 10.7 years. In univariate and multivariate analysis, age; history of cancer; the log base 10 transformation of serum carcinoembryonic antigen value; nodule diameter; and the presence of spiculation, pleural indentation, and calcification remained the predictive factors of malignancy. A nomogram was developed, and its AUC value (0.85; 95% CI, 0.83-0.88) was significantly higher than those of three other models. The calibration curve showed optimal agreement between the malignant probability as predicted by the nomogram and the actual probability. We developed and validated a nomogram that can estimate the pretest probability of malignant solid SPNs, which can assist clinical physicians in selecting and interpreting the results of subsequent diagnostic tests. © 2017 Wiley Periodicals, Inc.
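A nomogram of this kind is typically a graphical rendering of a multivariable logistic regression: each predictor contributes points, and the total maps to a probability through the logistic function. The sketch below has that shape only; every coefficient is invented for illustration and is not a fitted value from the paper:

```python
import math

def malignancy_probability(age, cancer_history, log10_cea, diameter_mm,
                           spiculation, pleural_indent, calcification):
    # Illustrative logistic model in the shape of the published nomogram.
    # Binary flags are 0/1; all coefficients below are hypothetical.
    z = (-4.0 + 0.04 * age + 0.8 * cancer_history + 0.9 * log10_cea
         + 0.06 * diameter_mm + 0.7 * spiculation + 0.6 * pleural_indent
         - 0.9 * calcification)   # calcification argues against malignancy
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient: 60 y, prior cancer, CEA log10 = 0.5, 20 mm
# spiculated nodule without pleural indentation or calcification
p = malignancy_probability(60, 1, 0.5, 20, 1, 0, 0)
```

The reported AUC of 0.85 and the calibration curve then assess, respectively, how well such predicted probabilities rank patients and how closely they match observed malignancy rates.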

  14. Estimating Primary Production of Picophytoplankton Using the Carbon-Based Ocean Productivity Model: A Preliminary Study

    PubMed Central

    Liang, Yantao; Zhang, Yongyu; Wang, Nannan; Luo, Tingwei; Zhang, Yao; Rivkin, Richard B.

    2017-01-01

Picophytoplankton are acknowledged to contribute significantly to primary production (PP) in the ocean, yet methods to measure the PP of picophytoplankton (PPPico) at large scales are not yet well established. Although the traditional 14C method and new technologies based on the use of stable isotopes (e.g., 13C) can accurately measure in situ PPPico, their time-consuming and labor-intensive nature constrains their application in surveys at large spatiotemporal scales. To overcome this limitation, a modified carbon-based ocean productivity model (CbPM) is proposed for estimating PPPico, based on group-specific abundance, the cellular carbon conversion factor (CCF), and the temperature-derived growth rate of picophytoplankton. Comparative analysis showed that the PPPico estimated using the CbPM method is significantly and positively related (r2 = 0.53, P < 0.001, n = 171) to measured 14C uptake. This significant relationship suggests that the CbPM has the potential to estimate PPPico over large spatial and temporal scales. Currently, the model's application may be limited by the use of an invariant cellular CCF and the relatively small data sets available to validate the model, which may introduce some uncertainties and biases. Model performance will be improved by the use of variable conversion factors and larger data sets representing diverse growth conditions. Finally, we apply the CbPM-based model to data collected during four cruises in the Bohai Sea in 2005. Model-estimated PPPico ranged from 0.1 to 11.9, 29.9 to 432.8, 5.5 to 214.9, and 2.4 to 65.8 mg C m-2 d-1 during March, June, September, and December, respectively. This study sheds light on the estimation of global PPPico using a carbon-based production model. PMID:29051755
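The CbPM bookkeeping described above multiplies a standing carbon stock (abundance × CCF) by a temperature-derived growth rate. The sketch below shows only that arithmetic; the exponential growth-temperature relation and every constant in it are assumptions for illustration, not the model's fitted parameterization:

```python
import math

def pp_pico(abundance_cells_per_m3, ccf_fg_c_per_cell, temperature_c):
    # Hypothetical CbPM-style estimate: carbon stock times growth rate.
    # The growth-temperature relation below is an invented stand-in.
    growth_per_day = 0.2 * math.exp(0.06 * temperature_c)  # d^-1, assumed
    carbon_mg_m3 = abundance_cells_per_m3 * ccf_fg_c_per_cell * 1e-12  # fg -> mg
    return carbon_mg_m3 * growth_per_day  # mg C m^-3 d^-1

# Hypothetical: 1e11 cells/m^3 at 50 fg C per cell in 20 °C water
pp = pp_pico(1e11, 50, 20)
```

The paper's criticism of an invariant CCF applies directly here: `ccf_fg_c_per_cell` should in principle vary with group and growth conditions rather than being a single constant.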

  15. Mixed group validation: a method to address the limitations of criterion group validation in research on malingering detection.

    PubMed

    Frederick, R I

    2000-01-01

    Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
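The core algebra of mixed group validation can be sketched in a few lines: with two groups whose base rates of pathology differ, the observed sign-positive rates give two linear equations in the sign's true positive rate (TPR) and false positive rate (FPR), solvable without knowing any individual's true status. The base rates and observed rates below are invented for illustration.

```python
def mgv_rates(base1, pos1, base2, pos2):
    """Solve  pos_g = base_g*TPR + (1 - base_g)*FPR  for (TPR, FPR).

    base_g: base rate of pathology in group g; pos_g: observed positive rate.
    Requires base1 != base2 (the two groups must differ in base rate).
    """
    tpr = ((1 - base2) * pos1 - (1 - base1) * pos2) / (base1 - base2)
    fpr = (base1 * pos2 - base2 * pos1) / (base1 - base2)
    return tpr, fpr

# Group A: 60% assumed malingering base rate, 55% flagged by the sign;
# Group B: 20% base rate, 25% flagged.
tpr, fpr = mgv_rates(0.60, 0.55, 0.20, 0.25)
print(round(tpr, 3), round(fpr, 3))  # → 0.85 0.1
```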

  16. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    PubMed

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

The ability to detect online changes in the stationarity or time variance of a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability-density-function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. The thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
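The least-squares density-difference (LSDD) building block this record relies on can be sketched compactly: model the difference p1 - p2 with Gaussian kernels and solve a regularized least-squares problem in closed form. The kernel width and regularization below are fixed ad hoc, and the paper's online machinery (reservoir sampling, automatic thresholds) is omitted.

```python
import numpy as np

def lsdd(X1, X2, sigma=1.0, lam=1e-3):
    """Estimate the squared L2 distance between the densities of X1 and X2."""
    C = np.vstack([X1, X2])                       # kernel centers
    d = C.shape[1]
    def phi(X):                                   # Gaussian kernel design matrix
        sq = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    # H_kl = integral of phi_k * phi_l over R^d has a Gaussian closed form
    sq_cc = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    H = (np.pi * sigma ** 2) ** (d / 2) * np.exp(-sq_cc / (4 * sigma ** 2))
    h = phi(X1).mean(0) - phi(X2).mean(0)
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return float(2 * h @ theta - theta @ H @ theta)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(100, 1))
b = rng.normal(0.0, 1.0, size=(100, 1))   # same distribution as a
c = rng.normal(3.0, 1.0, size=(100, 1))   # shifted distribution
print(lsdd(a, b) < lsdd(a, c))  # distance grows when the density changes
```

A change detector would monitor this statistic over a sliding window and flag values exceeding a threshold calibrated to the desired false positive rate.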

  17. Balancing Score Adjusted Targeted Minimum Loss-based Estimation

    PubMed Central

    Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.

    2015-01-01

    Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539

  18. Official statistics and claims data records indicate non-response and recall bias within survey-based estimates of health care utilization in the older population

    PubMed Central

    2013-01-01

    Background The validity of survey-based health care utilization estimates in the older population has been poorly researched. Owing to data protection legislation and a great number of different health care insurance providers, the assessment of recall and non-response bias is challenging to impossible in many countries. The objective of our study was to compare estimates from a population-based study in older German adults with external secondary data. Methods We used data from the German KORA-Age study, which included 4,127 people aged 65–94 years. Self-report questions covered the utilization of long-term care services, inpatient services, outpatient services, and pharmaceuticals. We calculated age- and sex-standardized mean utilization rates in each domain and compared them with the corresponding estimates derived from official statistics and independent statutory health insurance data. Results The KORA-Age study underestimated the use of long-term care services (−52%), in-hospital days (−21%) and physician visits (−70%). In contrast, the assessment of drug consumption by postal self-report questionnaires yielded similar estimates to the analysis of insurance claims data (−9%). Conclusion Survey estimates based on self-report tend to underestimate true health care utilization in the older population. Direct validation studies are needed to disentangle the impact of recall and non-response bias. PMID:23286781

  19. Estimating glomerular filtration rate in diabetes: a comparison of cystatin-C- and creatinine-based methods.

    PubMed

    Macisaac, R J; Tsalamandris, C; Thomas, M C; Premaratne, E; Panagiotopoulos, S; Smith, T J; Poon, A; Jenkins, M A; Ratnaike, S I; Power, D A; Jerums, G

    2006-07-01

We compared the predictive performance of GFR estimates based on serum cystatin C levels with commonly used creatinine-based methods in subjects with diabetes. In a cross-sectional study of 251 consecutive clinic patients, the mean reference (plasma clearance of (99m)Tc-diethylene-triamine-penta-acetic acid) GFR (iGFR) was 88+/-2 ml min(-1) 1.73 m(-2). A regression equation describing the relationship between iGFR and 1/cystatin C levels was derived from a test population (n=125) to allow estimation of GFR from cystatin C (eGFR-cystatin C). The predictive performance of eGFR-cystatin C, the Modification of Diet in Renal Disease 4-variable formula (MDRD-4), and the Cockcroft-Gault (C-G) formula was then compared in a validation population (n=126). There was no difference in renal function (ml min(-1) 1.73 m(-2)) as measured by iGFR (89.2+/-3.0), eGFR-cystatin C (86.8+/-2.5), MDRD-4 (87.0+/-2.8), or C-G (92.3+/-3.5). All three estimates of renal function had similar precision and accuracy. Estimates of GFR based solely on serum cystatin C levels had the same predictive potential as the MDRD-4 and C-G formulas.
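The eGFR-cystatin C estimate described here is a regression of isotopic GFR on the reciprocal of serum cystatin C. The sketch below shows that idea on fabricated, noise-free data; the published coefficients are not reproduced.

```python
import numpy as np

def fit_egfr_cystatin(cys_c, igfr):
    """Fit  iGFR ~ a + b * (1 / cystatin C)  by ordinary least squares."""
    x = 1.0 / np.asarray(cys_c)
    A = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(igfr), rcond=None)
    return a, b

def egfr(cys_c, a, b):
    """Estimated GFR for a given serum cystatin C level."""
    return a + b / cys_c

# Synthetic test population generated as iGFR = -10 + 90 / cystatin C (exact)
cys = np.array([0.8, 1.0, 1.2, 1.5, 2.0])
igfr = -10 + 90 / cys
a, b = fit_egfr_cystatin(cys, igfr)
print(round(a, 1), round(b, 1))  # recovers -10.0 and 90.0
```

In practice the coefficients would be fitted on a test population and applied to an independent validation population, as the study did.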

  20. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    PubMed

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.

  1. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    PubMed

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
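The decision rule described in this record can be sketched generically: a linear discriminant yields a posterior probability of one sex, and only posteriors at or beyond 0.95 (or at or below 0.05) produce an estimate; anything in between is indeterminate. The measurements and group statistics below are invented, not DSP2's reference data.

```python
import numpy as np

def lda_posterior(x, mu_f, mu_m, cov, prior_f=0.5):
    """Posterior P(female | x) for two-class LDA with shared covariance."""
    icov = np.linalg.inv(cov)
    w = icov @ (mu_f - mu_m)
    b = -0.5 * (mu_f @ icov @ mu_f - mu_m @ icov @ mu_m) \
        + np.log(prior_f / (1 - prior_f))
    score = x @ w + b
    return 1.0 / (1.0 + np.exp(-score))

def estimate_sex(x, mu_f, mu_m, cov, threshold=0.95):
    """Return 'F', 'M', or 'indeterminate' using a posterior threshold."""
    p = lda_posterior(np.asarray(x), mu_f, mu_m, cov)
    if p >= threshold:
        return "F"
    if p <= 1 - threshold:
        return "M"
    return "indeterminate"

# Two hypothetical pelvic dimensions (mm), made-up group means and covariance
mu_f, mu_m = np.array([100.0, 35.0]), np.array([95.0, 30.0])
cov = np.array([[4.0, 0.0], [0.0, 4.0]])
print(estimate_sex([101.0, 36.0], mu_f, mu_m, cov))  # clearly female-like
print(estimate_sex([97.5, 32.5], mu_f, mu_m, cov))   # near the boundary
```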

  2. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

The number of batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to address this gap. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired, based on risk assessment and calculation of process capability.
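The paper's exact run-sizing method is not reproduced in this record. One common, statistically grounded rule in the same spirit is the success-run theorem, which gives the number of consecutive successful runs n needed to claim reliability R with confidence C via C = 1 - R**n; it is shown here only to illustrate why "three batches" is rarely enough.

```python
import math

def runs_required(reliability: float, confidence: float) -> int:
    """Success-run theorem: smallest n with 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Claiming 90% reliability with 95% confidence needs far more than 3 batches:
print(runs_required(0.90, 0.95))  # → 29
```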

  3. Field-scale moisture estimates using COSMOS sensors: a validation study with temporary networks and leaf-area-indices

    USDA-ARS?s Scientific Manuscript database

    The Cosmic-ray Soil Moisture Observing System (COSMOS) is a new and innovative method for estimating surface and near surface soil moisture at large (~700 m) scales. This system accounts for liquid water within its measurement volume. Many of the sites used in the early validation of the system had...

  4. Performance assessment of different day-of-the-year-based models for estimating global solar radiation - Case study: Egypt

    NASA Astrophysics Data System (ADS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Ali, Mohamed A.; Mohamed, Zahraa E.; Shehata, Ali I.

    2016-11-01

Different models have been introduced to predict daily global solar radiation in different locations, but no model based specifically on the day of the year has been proposed for many locations around the world. In this study, more than 20 years of measured data for daily global solar radiation on a horizontal surface are used to develop and validate seven models that estimate daily global solar radiation by day of the year for ten cities around Egypt as a case study. Moreover, the generalization capability of the best models is examined over the whole country. Regression analysis is employed to calculate the coefficients of the suggested models. The statistical indicators RMSE, MABE, MAPE, r, and R2 are calculated to evaluate the performance of the developed models. Based on validation with the available data, the results show that the hybrid sine-and-cosine wave model and the 4th-order polynomial model perform best among the suggested models. Consequently, these two models, coupled with suitable coefficients, can be used to estimate daily global solar radiation on a horizontal surface for each city, and also for all locations around the studied region. The established models are applicable and useful for quick estimation of average daily global solar radiation on a horizontal surface with good accuracy. The values of global solar radiation generated by this approach can be utilized in the design and performance estimation of different solar applications.
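A day-of-the-year model of the hybrid sine-and-cosine form mentioned above can be fitted by linear regression, since the model is linear in its coefficients. The functional form and data below are illustrative assumptions, not the paper's fitted coefficients.

```python
import numpy as np

def fit_doy_model(day, H):
    """Fit  H(n) = a + b*sin(2*pi*n/365) + c*cos(2*pi*n/365)  by least squares."""
    w = 2 * np.pi * np.asarray(day) / 365.0
    A = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(H), rcond=None)
    return coef, A @ coef

# Synthetic "measurements" generated from a known model (MJ m^-2 day^-1)
day = np.arange(1, 366)
true_H = 20.0 + 2.0 * np.sin(2 * np.pi * day / 365) \
              - 8.0 * np.cos(2 * np.pi * day / 365)
coef, pred = fit_doy_model(day, true_H)
rmse = float(np.sqrt(np.mean((pred - true_H) ** 2)))
print(np.round(coef, 1), round(rmse, 6))
```

Real data would of course carry noise, and the RMSE, MABE, MAPE, r, and R2 indicators named in the abstract would then discriminate between candidate model forms.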

  5. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Gonder, Jeffrey; Jehlik, Forrest

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Here, model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  6. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy

    DOE PAGES

    Wood, Eric; Gonder, Jeffrey; Jehlik, Forrest

    2017-03-28

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Here, model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  7. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method, in combination with a physical model parameter identification method, is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is specifically designed for electric vehicle operating conditions. As the battery's available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.

  8. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    PubMed

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys of the general population are a novelty and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate the population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. Our aims were to apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based household survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs, such as heroin, also were larger than the standard self-report estimates. We remain unsure about the "true" prevalence of illegal psychotropic drug use in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce under-estimates or, at best, highly conservative lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
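The back-solving step behind randomized response designs is simple. In the classic unrelated-question variant (shown here as a generic illustration; the Georgian survey's actual design may differ), each respondent privately answers the sensitive question with probability p and an innocuous question with known "yes" probability q otherwise, so the observed "yes" rate lam satisfies lam = p*pi + (1-p)*q.

```python
def rrt_prevalence(yes_rate: float, p: float, q: float) -> float:
    """Recover sensitive-trait prevalence pi from the observed 'yes' rate.

    yes_rate: overall observed proportion of 'yes' answers;
    p: probability the sensitive question was asked;
    q: known 'yes' probability of the unrelated question.
    """
    return (yes_rate - (1 - p) * q) / p

# If 40% answer "yes" under p = 0.7 with an innocuous question of q = 0.25:
print(round(rrt_prevalence(0.40, 0.7, 0.25), 4))  # → 0.4643
```

Because no interviewer knows which question any individual answered, respondents can admit sensitive behavior without self-incrimination, which is why RRT estimates tend to exceed direct self-report.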

  9. Reliability and Validity of Inferences about Teachers Based on Student Scores. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Haertel, Edward H.

    2013-01-01

    Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive…

  10. Validating a biometric authentication system: sample size requirements.

    PubMed

    Dass, Sarat C; Zhu, Yongfang; Jain, Anil K

    2006-12-01

Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.
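The record's central concern is that acquisitions from the same subject are correlated, so uncertainty should be assessed at the subject level. The sketch below illustrates that point with a subject-level bootstrap confidence interval for the AUC of match scores; it is a simple stand-in for the paper's copula-based construction, and all scores are synthetic.

```python
import numpy as np

def auc(genuine, impostor):
    """P(genuine score > impostor score), computed by exhaustive comparison."""
    g, i = np.asarray(genuine), np.asarray(impostor)
    return float((g[:, None] > i[None, :]).mean())

rng = np.random.default_rng(42)
n_subjects = 50
# Each subject contributes several correlated genuine and impostor scores
# via a shared per-subject random effect.
subj_effect = rng.normal(0, 0.5, n_subjects)
genuine = [2.0 + e + rng.normal(0, 1, 4) for e in subj_effect]
impostor = [0.0 + e + rng.normal(0, 1, 4) for e in subj_effect]

aucs = []
for _ in range(200):
    idx = rng.integers(0, n_subjects, n_subjects)       # resample subjects
    aucs.append(auc(np.concatenate([genuine[j] for j in idx]),
                    np.concatenate([impostor[j] for j in idx])))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(0.5 < lo < hi <= 1.0)  # interval sits well above chance performance
```

Resampling whole subjects (rather than individual scores) keeps the within-subject correlation intact, which is the same motivation the authors give for rejecting the independence assumption.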

  11. Designing and validation of a yoga-based intervention for schizophrenia.

    PubMed

    Govindaraj, Ramajayam; Varambally, Shivarama; Sharma, Manjunath; Gangadhar, Bangalore Nanjundaiah

    2016-06-01

Schizophrenia is a chronic mental illness that causes significant distress and dysfunction. Yoga has been found to be effective as an add-on therapy in schizophrenia. Modules of yoga used in previous studies were based on individual researchers' experience. This study aimed to develop and validate a specific generic yoga-based intervention module for patients with schizophrenia. The study was conducted at the NIMHANS Integrated Centre for Yoga (NICY). A yoga module was designed based on traditional and contemporary yoga literature as well as published studies. The yoga module, along with three case vignettes of adult patients with schizophrenia, was sent to 10 yoga experts for validation. The experts (n = 10) judged the module useful for patients with schizophrenia, subject to some modifications. In total, 87% (13 of 15) of the items in the initial module were retained, with the remainder modified as suggested by the experts. A specific yoga-based module for schizophrenia was designed and validated by experts. Further studies are needed to confirm the efficacy and clinical utility of the module. Additional clinical validation is suggested.

  12. DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacCrann, N.; et al.

We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realizations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $\Omega_m$-$\sigma_8$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $S_8 = \sigma_8(\Omega_m/0.3)^{0.5}$ and $\Omega_m$ are smaller than the DES Y1 $1\sigma$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $\Omega_m$ and $S_8$ are subdominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.

13. Validity and Practicality of Acid-Base Module Based on Guided Discovery Learning for Senior High School

    NASA Astrophysics Data System (ADS)

    Yerimadesi; Bayharti; Jannah, S. M.; Lufri; Festiyed; Kiram, Y.

    2018-04-01

This Research and Development (R&D) study aims to produce a guided-discovery-learning-based module on the topic of acids and bases and to determine its validity and practicality in learning. Module development used the Four-D (4-D) model (define, design, develop, and disseminate); this research was carried out through the development stage. The research instruments were validity and practicality questionnaires. The module was validated by five experts (three chemistry lecturers of Universitas Negeri Padang and two chemistry teachers of SMAN 9 Padang). The practicality test was done by two chemistry teachers and 30 students of SMAN 9 Padang. Cohen's kappa was used to analyze validity and practicality. The average kappa moment was 0.86 for validity, and those for practicality were 0.85 by teachers and 0.76 by students, indicating a high category. It can be concluded that the module's validity and practicality were established for high school chemistry learning.
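The kappa statistic used for the validity analysis corrects raw agreement for agreement expected by chance. Below is a generic sketch of Cohen's kappa for two raters with invented ratings; the study's questionnaire-specific "kappa moment" computation may differ in detail.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical item-level judgments from two raters
rater1 = ["valid", "valid", "valid", "invalid", "valid", "invalid", "valid", "valid"]
rater2 = ["valid", "valid", "invalid", "invalid", "valid", "invalid", "valid", "valid"]
print(round(cohens_kappa(rater1, rater2), 3))
```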

  14. Development of a mechatronic platform and validation of methods for estimating ankle stiffness during the stance phase of walking.

    PubMed

    Rouse, Elliott J; Hargrove, Levi J; Perreault, Eric J; Peshkin, Michael A; Kuiken, Todd A

    2013-08-01

    The mechanical properties of human joints (i.e., impedance) are constantly modulated to precisely govern human interaction with the environment. The estimation of these properties requires the displacement of the joint from its intended motion and a subsequent analysis to determine the relationship between the imposed perturbation and the resultant joint torque. There has been much investigation into the estimation of upper-extremity joint impedance during dynamic activities, yet the estimation of ankle impedance during walking has remained a challenge. This estimation is important for understanding how the mechanical properties of the human ankle are modulated during locomotion, and how those properties can be replicated in artificial prostheses designed to restore natural movement control. Here, we introduce a mechatronic platform designed to address the challenge of estimating the stiffness component of ankle impedance during walking, where stiffness denotes the static component of impedance. The system consists of a single degree of freedom mechatronic platform that is capable of perturbing the ankle during the stance phase of walking and measuring the response torque. Additionally, we estimate the platform's intrinsic inertial impedance using parallel linear filters and present a set of methods for estimating the impedance of the ankle from walking data. The methods were validated by comparing the experimentally determined estimates for the stiffness of a prosthetic foot to those measured from an independent testing machine. The parallel filters accurately estimated the mechatronic platform's inertial impedance, accounting for 96% of the variance, when averaged across channels and trials. Furthermore, our measurement system was found to yield reliable estimates of stiffness, which had an average error of only 5.4% (standard deviation: 0.7%) when measured at three time points within the stance phase of locomotion, and compared to the independently determined
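The core estimation step this record describes, relating an imposed perturbation to the resultant joint torque, can be sketched as a regression of torque onto angular displacement, velocity, and acceleration; the displacement coefficient is the stiffness (the static part of impedance). The signals below are simulated from assumed parameters purely to show that the fit recovers them; the platform's inertial-compensation filters are omitted.

```python
import numpy as np

def fit_impedance(theta, dtheta, ddtheta, torque):
    """Least-squares fit of  torque = K*theta + B*dtheta + I*ddtheta."""
    A = np.column_stack([theta, dtheta, ddtheta])
    (K, B, I), *_ = np.linalg.lstsq(A, torque, rcond=None)
    return K, B, I

rng = np.random.default_rng(1)
t = np.linspace(0, 0.2, 400)
# Two-frequency perturbation so stiffness and inertia are separately identifiable
theta = 0.02 * np.sin(2 * np.pi * 10 * t) + 0.01 * np.sin(2 * np.pi * 25 * t)
dtheta = np.gradient(theta, t)
ddtheta = np.gradient(dtheta, t)
K_true, B_true, I_true = 300.0, 2.0, 0.02   # N*m/rad, N*m*s/rad, kg*m^2 (assumed)
torque = (K_true * theta + B_true * dtheta + I_true * ddtheta
          + rng.normal(0, 0.01, t.size))    # small measurement noise
K, B, I = fit_impedance(theta, dtheta, ddtheta, torque)
print(f"{K:.0f} {B:.1f} {I:.3f}")
```

A single-frequency perturbation would make displacement and acceleration collinear (since the second derivative of a sinusoid is proportional to the sinusoid itself), which is why the sketch uses two frequencies.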

  15. Estimating Escherichia coli loads in streams based on various physical, chemical, and biological factors

    PubMed Central

    Dwivedi, Dipankar; Mohanty, Binayak P.; Lesikar, Bruce J.

    2013-01-01

Microbes have been identified as a major contaminant of water resources. Escherichia coli (E. coli) is a commonly used indicator organism. It is well recognized that the fate of E. coli in surface water systems is governed by multiple physical, chemical, and biological factors. The aim of this work is to provide insight into the physical, chemical, and biological factors, along with their interactions, that are critical in the estimation of E. coli loads in surface streams. Various models exist to predict E. coli loads in streams, but they tend to be system or site specific, or overly complex without enhancing our understanding of these factors. Hence, based on available data, a Bayesian Neural Network (BNN) is presented for estimating E. coli loads from physical, chemical, and biological factors in streams. The BNN has the dual advantage of overcoming the absence of quality data (with regard to consistency in data) and of determining mechanistic model parameters by employing a probabilistic framework. This study evaluates whether the BNN model can be an effective alternative tool to mechanistic models for E. coli load estimation in streams. For this purpose, a comparison with a traditional model (LOADEST, USGS) is conducted. The models are compared for estimated E. coli loads based on available water quality data in Plum Creek, Texas. All the model efficiency measures suggest that overall E. coli load estimates by the BNN model are better than those by the LOADEST model on all three occasions (three-fold cross validation). Thirteen factors were considered with an exhaustive feature selection technique, which indicated that six of the thirteen are important for estimating E. coli loads. Physical factors included temperature and dissolved oxygen; chemical factors included phosphate and ammonia; biological factors included suspended solids and chlorophyll. The results highlight that the LOADEST model

  16. Determining the Scoring Validity of a Co-Constructed CEFR-Based Rating Scale

    ERIC Educational Resources Information Center

    Deygers, Bart; Van Gorp, Koen

    2015-01-01

    Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper wishes to answer are (a) whether it is possible to construct a CEFR-based rating scale with…

  17. Latency-Based and Psychophysiological Measures of Sexual Interest Show Convergent and Concurrent Validity.

    PubMed

    Ó Ciardha, Caoilte; Attard-Johnson, Janice; Bindemann, Markus

    2018-04-01

    Latency-based measures of sexual interest require additional evidence of validity, as do newer pupil dilation approaches. A total of 102 community men completed six latency-based measures of sexual interest. Pupillary responses were recorded during three of these tasks and in an additional task where no participant response was required. For adult stimuli, there was a high degree of intercorrelation between measures, suggesting that tasks may be measuring the same underlying construct (convergent validity). In addition to being correlated with one another, measures also predicted participants' self-reported sexual interest, demonstrating concurrent validity (i.e., the ability of a task to predict a more validated, simultaneously recorded, measure). Latency-based and pupillometric approaches also showed preliminary evidence of concurrent validity in predicting both self-reported interest in child molestation and viewing pornographic material containing children. Taken together, the study findings build on the evidence base for the validity of latency-based and pupillometric measures of sexual interest.

  18. Validation of Core Temperature Estimation Algorithm

    DTIC Science & Technology

    2016-01-29

    …plot of observed versus estimated core temperature with the line of identity (dashed), the least squares regression line (solid), and the line equation… estimated PSI with the line of identity (dashed), the least squares regression line (solid), and the line equation in the top left corner. (b) Bland-Altman plot… for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.
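    The excerpt names a least squares regression line and an RMSE (its Equation 2). The report's exact formulation is not shown here, so this sketch follows the standard definitions, with invented temperature values:

```python
import math

def least_squares_line(x, y):
    """Return (slope, intercept) of the least squares regression line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def rmse(observed, estimated):
    """Root mean squared error between observed and estimated series."""
    return math.sqrt(sum((o - e) ** 2
                         for o, e in zip(observed, estimated)) / len(observed))

# Illustrative observed vs. estimated core temperatures (deg C).
obs = [37.0, 37.4, 38.1, 38.6, 39.0]
est = [37.1, 37.3, 38.0, 38.8, 39.1]
slope, intercept = least_squares_line(obs, est)
print(round(rmse(obs, est), 3))  # → 0.126
```

    A regression line close to the line of identity (slope ≈ 1, intercept ≈ 0) together with a small RMSE is what the report's validation plots are checking for.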

  19. Estimation of the spatial validity of local aerosol measurements in Europe using MODIS data

    NASA Astrophysics Data System (ADS)

    Marcos, Carlos; Gómez-Amo, J. Luis; Pedrós, Roberto; Utrillas, M. Pilar; Martínez-Lozano, J. Antonio

    2013-04-01

    The actual impact of atmospheric aerosols on the Earth's radiative budget is still associated with large uncertainties [IPCC, 2007]. Global monitoring of the aerosol properties and distribution in the atmosphere is needed to improve our knowledge of climate change. The instrumentation used for this purpose can be divided into two main groups: ground-based and satellite-based. Ground-based instruments, like lidars or Sun photometers, are usually designed to measure accurate local properties of atmospheric aerosols throughout the day. However, the spatial validity of these measurements is conditioned by the aerosol variability within the atmosphere. Satellite-based sensors offer spatially resolved information about aerosols at a global scale, but generally with a worse temporal resolution and in a less detailed way. In this work, the aerosol optical depth (AOD) at 550 nm from MODIS Aqua, product MYD04 [Remer, 2005], is used to estimate the area of validity of local measurements at different reference points, corresponding to the AERONET [Holben, 1998] stations during the 2011-2012 period in Europe. For each case, the local AOD (AODloc) at each reference point is calculated as the averaged MODIS data within a radius of 15 km. Then, the AODloc is compared to the AOD obtained when a larger averaging radius is used (AOD(r)), up to 500 km. Only those cases where more than 50% of the pixels in each averaging area contain valid data are used. Four factors that could affect the spatial variability of aerosols are studied: proximity to the sea, human activity, aerosol load, and geographical location (latitude and longitude). For the 76 reference points studied, which are sited in different regions of Europe, we have determined that the root mean squared difference (RMSD) between AODloc and AOD(r), averaged for all cases, increases logarithmically with the averaging radius (RMSD ∝ log(r)), while the linear correlation coefficient (R) decreases following a logarithmic trend.

  20. Development and validation of the simulation-based learning evaluation scale.

    PubMed

    Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O

    2016-05-01

    Existing instruments that evaluate students' perceptions of simulation-based training are English-language versions that have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, an initial item pool of 50 simulation-related items was drawn from the core-competency literature. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study. Two hundred and twenty-five students completed and returned questionnaires (response rate = 90%). Six items were deleted from the initial item pool and one was added after the expert panel review. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). Exploratory factor analysis with varimax rotation left 37 items in five factors, which accounted for 67% of the variance. The construct validity of the SBLES was substantiated by a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure. The findings satisfy the criteria of convergent and discriminant validity. The internal consistency of the five subscales ranged from .90 to .93. The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.
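    Cronbach's coefficient alpha, used above for internal consistency, is computed from per-item variances and the variance of respondents' total scores. A minimal sketch with illustrative 5-point ratings (not the SBLES data):

```python
def cronbach_alpha(items):
    """items: one inner list of scores per item, aligned across the
    same respondents. Returns Cronbach's coefficient alpha."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items rated by five respondents on a 1-5 scale (illustrative).
items = [[4, 5, 3, 4, 2],
         [5, 5, 3, 4, 2],
         [4, 4, 2, 5, 3]]
print(round(cronbach_alpha(items), 2))  # → 0.89
```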

  1. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhanced system reliability and availability. Moreover, knowledge about fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequencies. Then, an amplitude estimator for the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach, for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
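    As a simplified stand-in for the paper's amplitude estimator (the MD MUSIC algorithm itself is not reproduced here), the amplitude of a current component at a known fault characteristic frequency can be estimated by projecting the signal onto a complex exponential at that frequency. Signal parameters below are illustrative:

```python
import cmath, math

def amplitude_at(signal, fs, freq):
    """Estimate the amplitude of the sinusoidal component at `freq` Hz
    by DFT-style projection onto a complex exponential."""
    n = len(signal)
    acc = sum(s * cmath.exp(-2j * math.pi * freq * k / fs)
              for k, s in enumerate(signal))
    return 2.0 * abs(acc) / n

# Synthetic stator current: 50 Hz fundamental plus a small 87 Hz
# bearing-fault component (amplitudes 2.0 and 0.1, illustrative).
fs, n = 1000, 1000
x = [2.0 * math.cos(2 * math.pi * 50 * k / fs)
     + 0.1 * math.cos(2 * math.pi * 87 * k / fs) for k in range(n)]
print(round(amplitude_at(x, fs, 50), 3), round(amplitude_at(x, fs, 87), 3))
```

    The ratio of the fault-frequency amplitude to the fundamental amplitude is one plausible basis for a severity indicator of the kind the abstract describes.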

  2. Cross Validation of Rain Drop Size Distribution between GPM and Ground-Based Polarimetric Radar

    NASA Astrophysics Data System (ADS)

    Chandra, C. V.; Biswas, S.; Le, M.; Chen, H.

    2017-12-01

    Dual-frequency precipitation radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite has reflectivity measurements at two independent frequencies, Ku- and Ka-band. Dual-frequency retrieval algorithms have been developed traditionally through forward, backward, and recursive approaches. However, these algorithms suffer from the "dual-value" problem when they retrieve medium volume diameter from the dual-frequency ratio (DFR) in the rain region. To this end, a hybrid method has been proposed to perform raindrop size distribution (DSD) retrieval for GPM using a linear constraint of DSD along the rain profile to avoid the "dual-value" problem (Le and Chandrasekar, 2015). In the current GPM level 2 algorithm (Iguchi et al. 2017, Algorithm Theoretical Basis Document), the Solver module retrieves a vertical profile of drop size distribution from dual-frequency observations and path integrated attenuations; the algorithm details can be found in Seto et al. (2013). On the other hand, ground-based polarimetric radars have been used for a long time to estimate drop size distributions (e.g., Gorgucci et al. 2002). In addition, coincident GPM and ground-based observations have been cross validated using careful overpass analysis. In this paper, we perform cross validation on raindrop size distribution retrievals from three sources, namely the hybrid method, the standard products from the Solver module, and DSD retrievals from ground polarimetric radars. The results are presented from two NEXRAD radars located in Dallas-Fort Worth, Texas (i.e., KFWS radar) and Melbourne, Florida (i.e., KMLB radar). The results demonstrate the ability of DPR observations to produce DSD estimates, which can be used subsequently to generate global DSD maps. References: Seto, S., T. Iguchi, T. Oki, 2013: The basic performance of a precipitation retrieval algorithm for the Global Precipitation Measurement mission's single/dual-frequency radar measurements. IEEE Transactions on Geoscience and

  3. Wechsler Adult Intelligence Scale-IV Dyads for Estimating Global Intelligence.

    PubMed

    Girard, Todd A; Axelrod, Bradley N; Patel, Ronak; Crawford, John R

    2015-08-01

    All possible two-subtest combinations of the core Wechsler Adult Intelligence Scale-IV (WAIS-IV) subtests were evaluated as possible viable short forms for estimating full-scale IQ (FSIQ). Validity of the dyads was evaluated relative to FSIQ in a large clinical sample (N = 482) referred for neuropsychological assessment. Sample validity measures included correlations, mean discrepancies, and levels of agreement between dyad estimates and FSIQ scores. In addition, reliability and validity coefficients were derived from WAIS-IV standardization data. The Coding + Information dyad had the strongest combination of reliability and validity data. However, several other dyads yielded comparable psychometric performance, albeit with some variability in their particular strengths. We also observed heterogeneity between validity coefficients from the clinical and standardization-based estimates for several dyads. Thus, readers are encouraged to also consider the individual psychometric attributes, their clinical or research goals, and client or sample characteristics when selecting among the dyadic short forms. © The Author(s) 2014.

  4. Repeatability and validity of a field kit for estimation of cholinesterase in whole blood.

    PubMed Central

    London, L; Thompson, M L; Sacks, S; Fuller, B; Bachmann, O M; Myers, J E

    1995-01-01

    OBJECTIVES--To evaluate a spectrophotometric field kit (Test-Mate-OP) for repeatability and validity in comparison with reference laboratory methods, and to model its anticipated sensitivity and specificity based on these findings. METHODS--76 farm workers between the ages of 20 and 55, of whom 30 were pesticide applicators exposed to a range of organophosphates in the preceding 10 days, had blood taken for plasma cholinesterase (PCE) and erythrocyte cholinesterase (ECE) measurement by field kit or laboratory methods. Paired blinded duplicate samples were taken from subgroups in the sample to assess repeatability of laboratory and field kit methods. Field kits were also used to test venous blood in one subgroup. The variance obtained for the field kit tests was then applied to two hypothetical scenarios that used published action guidelines to model the kit's sensitivity and specificity. RESULTS--Repeatability for PCE was much poorer, and for ECE slightly poorer, than that of the laboratory measures. A substantial upward bias for field kit ECE relative to laboratory measurements was found. Sensitivity of the kit to a 40% drop in PCE was 67%, whereas that for ECE was 89%. Specificity of the kit with no change in the population mean was 100% for ECE and 91% for PCE. CONCLUSION--Field kit ECE estimation seems to be sufficiently repeatable for surveillance activities, whereas PCE does not. Repeatability of both tests seems to be too low for use in epidemiological dose-response investigations. Further research is indicated to characterise the upward bias in ECE estimation on the kit. PMID:7697143
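    The sensitivity and specificity figures above follow the standard screening definitions; a minimal sketch (the counts are illustrative, not the study's):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of true cholinesterase depressions the kit flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of unchanged (unexposed) cases the kit passes."""
    return true_neg / (true_neg + false_pos)

# Illustrative counts for a hypothetical screening scenario.
print(round(sensitivity(true_pos=24, false_neg=3), 2))   # 24/27
print(round(specificity(true_neg=45, false_pos=4), 2))   # 45/49
```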

  5. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    PubMed

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming-potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emission estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emission estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based methodology resulting in a systematic 42% underestimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emission estimates in closer agreement with ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  6. The Model Human Processor and the Older Adult: Parameter Estimation and Validation within a Mobile Phone Task

    ERIC Educational Resources Information Center

    Jastrzembski, Tiffany S.; Charness, Neil

    2007-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20;…

  7. Global precipitation estimates based on a technique for combining satellite-based estimates, rain gauge analysis, and NWP model precipitation information

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.

    1995-01-01

    The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
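    The inverse-error-variance weighting described above can be sketched as a generic combination rule (this is not the SGM code; the values are illustrative):

```python
def combine(estimates, error_variances):
    """Combine field estimates for one grid cell, weighting each by its
    inverse error variance; returns the weighted mean."""
    weights = [1.0 / v for v in error_variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Multisatellite and gauge monthly precipitation for one grid cell
# (mm/day, illustrative), with assumed error variances.
satellite, gauge = 5.0, 4.0
combined = combine([satellite, gauge], [0.8, 0.4])
print(round(combined, 3))  # → 4.333
```

    The better-constrained field (smaller error variance, here the gauge analysis) pulls the combined value toward itself, which is exactly why imperfect relative-error estimation is the method's primary limitation.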

  8. Validation of SMAP Surface Soil Moisture Products with Core Validation Sites

    NASA Technical Reports Server (NTRS)

    Colliander, A.; Jackson, T. J.; Bindlish, R.; Chan, S.; Das, N.; Kim, S. B.; Cosh, M. H.; Dunbar, R. S.; Dang, L.; Pashaian, L.; hide

    2017-01-01

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well calibrated in situ soil moisture measurements within SMAP product grid pixels for diverse conditions and locations. The estimation of the average soil moisture within the SMAP product grid pixels based on in situ measurements is more reliable when location-specific calibration of the sensors has been performed and there is adequate replication over the spatial domain, with an up-scaling function based on analysis using independent estimates of the soil moisture distribution. SMAP fulfilled these requirements through a collaborative CalVal Partner program. This paper presents the results from 34 candidate core validation sites for the first eleven months of the SMAP mission. As a result of the screening of the sites prior to the availability of SMAP data, 18 of the 34 candidate sites fulfilled all the requirements at at least one of the resolution scales. The rest of the sites are used as secondary information in algorithm evaluation. The results indicate that the SMAP radiometer-based soil moisture data product meets its expected performance of 0.04 m³/m³ volumetric soil moisture (unbiased root mean square error); the combined radar-radiometer product is close to its expected performance of 0.04 m³/m³, and the radar-based product meets its target accuracy of 0.06 m³/m³ (the records of the combined and radar-based products are truncated to about 10 weeks because of the SMAP radar failure). Upon completing the intensive CalVal phase of the mission, the SMAP project will continue to enhance the products in the primary and extended geographic domains, in cooperation with the CalVal Partners, by continuing the comparisons over the existing core validation sites and including candidate sites that can address shortcomings.
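    The unbiased root mean square error used as SMAP's performance metric removes the mean bias from the RMSE. A minimal sketch under the standard definition (the soil moisture pairs are illustrative):

```python
import math

def ubrmse(satellite, in_situ):
    """Unbiased RMSE: sqrt(MSE - bias^2), with bias the mean difference."""
    diffs = [s - i for s, i in zip(satellite, in_situ)]
    bias = sum(diffs) / len(diffs)
    mse = sum(d ** 2 for d in diffs) / len(diffs)
    return math.sqrt(mse - bias ** 2)

# Illustrative volumetric soil moisture pairs (m^3/m^3).
sat = [0.22, 0.30, 0.27, 0.35]
ins = [0.20, 0.24, 0.25, 0.31]
print(round(ubrmse(sat, ins), 4))  # → 0.0166
```

    Subtracting the bias means a retrieval with a constant offset from the in situ average can still meet the 0.04 m³/m³ target, which is why bias is usually reported separately.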

  9. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space, and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles; according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and the region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
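    A toy illustration of a confidence-interval stopping rule in the spirit of the abstract. This sketch uses a normal approximation in place of the exact t quantile and a Gaussian surrogate for per-particle dose deposition; it is not the authors' implementation:

```python
import math, random, statistics

def run_until_converged(sample_dose, threshold, z=1.96, batch=50, max_n=100000):
    """Draw per-particle dose deposits until the half-width of the
    ~95% confidence interval of the mean falls below `threshold`."""
    deposits = []
    while len(deposits) < max_n:
        deposits.extend(sample_dose() for _ in range(batch))
        mean = statistics.fmean(deposits)
        half = z * statistics.stdev(deposits) / math.sqrt(len(deposits))
        if half < threshold:
            return mean, half, len(deposits)
    return mean, half, len(deposits)

random.seed(0)
# Surrogate: per-particle voxel dose ~ N(1.0, 0.1), arbitrary units.
mean, half, n = run_until_converged(lambda: random.gauss(1.0, 0.1),
                                    threshold=0.01)
print(n, round(half, 4))
```

    The stopping particle count grows roughly as (z * sigma / threshold)², which is the "prior" flavor of the estimate: given a variance estimate, the required number of histories can be predicted before running the full simulation.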

  10. Comparison of type 2 diabetes prevalence estimates in Saudi Arabia from a validated Markov model against the International Diabetes Federation and other modelling studies

    PubMed Central

    Al-Quwaidhi, Abdulkareem J.; Pearce, Mark S.; Sobngwi, Eugene; Critchley, Julia A.; O’Flaherty, Martin

    2014-01-01

    Aims To compare the estimates and projections of type 2 diabetes mellitus (T2DM) prevalence in Saudi Arabia from a validated Markov model against other modelling estimates, such as those produced by the International Diabetes Federation (IDF) Diabetes Atlas and the Global Burden of Disease (GBD) project. Methods A discrete-state Markov model was developed and validated that integrates data on population, obesity and smoking prevalence trends in adult Saudis aged ≥25 years to estimate the trends in T2DM prevalence (annually from 1992 to 2022). The model was validated by comparing the age- and sex-specific prevalence estimates against a national survey conducted in 2005. Results Prevalence estimates from this new Markov model were consistent with the 2005 national survey and very similar to the GBD study estimates. Prevalence in men and women in 2000 was estimated by the GBD model respectively at 17.5% and 17.7%, compared to 17.7% and 16.4% in this study. The IDF estimates of the total diabetes prevalence were considerably lower at 16.7% in 2011 and 20.8% in 2030, compared with 29.2% in 2011 and 44.1% in 2022 in this study. Conclusion In contrast to other modelling studies, both the Saudi IMPACT Diabetes Forecast Model and the GBD model directly incorporated the trends in obesity prevalence and/or body mass index (BMI) to inform T2DM prevalence estimates. It appears that such a direct incorporation of obesity trends in modelling studies results in higher estimates of the future prevalence of T2DM, at least in countries where obesity has been rapidly increasing. PMID:24447810

  11. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    PubMed

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and each body segment in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
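    The prediction step described above is a simple linear regression of FFM on the BI index (length²/Z). A minimal sketch with invented example values, not the study's data:

```python
def fit_ffm_model(bi_index, ffm):
    """Least squares fit: FFM ≈ intercept + slope * (length^2 / Z)."""
    n = len(bi_index)
    mx, my = sum(bi_index) / n, sum(ffm) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(bi_index, ffm))
             / sum((x - mx) ** 2 for x in bi_index))
    return my - slope * mx, slope

# BI index in cm^2/ohm and measured leg FFM in kg (illustrative).
bi = [18.0, 22.5, 25.0, 30.0, 34.5]
ffm = [4.1, 5.0, 5.6, 6.5, 7.4]
intercept, slope = fit_ffm_model(bi, ffm)
predicted = intercept + slope * 27.0  # predict FFM for a new BI index
print(round(slope, 3), round(predicted, 2))
```

    Cross validation then applies the frozen (intercept, slope) from the development group to a held-out group and checks the residuals for systematic error.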

  12. Validation of SMAP Root Zone Soil Moisture Estimates with Improved Cosmic-Ray Neutron Probe Observations

    NASA Astrophysics Data System (ADS)

    Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.

    2017-12-01

    Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.

  13. SAMICS Validation. SAMICS Support Study, Phase 3

    NASA Technical Reports Server (NTRS)

    1979-01-01

    SAMICS provides a consistent basis for estimating array costs and for comparing production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for the allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.

  14. Validity of two methods for estimation of vertical jump height.

    PubMed

    Dias, Jonathan Ache; Dal Pupo, Juliano; Reis, Diogo C; Borges, Lucas; Santos, Saray G; Moro, Antônio R P; Borges, Noé G

    2011-07-01

    The objectives of this study were (a) to determine the concurrent validity of the flight time (FT) and double integration of vertical reaction force (DIF) methods in the estimation of vertical jump height, with the video method (VID) as reference; (b) to verify the degree of agreement among the 3 methods; and (c) to propose regression equations to predict the jump height using the FT and DIF. Twenty healthy male and female nonathlete college students participated in this study. The experiment involved positioning a contact mat (CTM) on the force platform (FP), with a video camera 3 m from the FP and perpendicular to the sagittal plane of the subject being assessed. Each participant performed 15 countermovement jumps with 60-second intervals between the trials. Significant differences were found between the jump height obtained by VID and the results with FT (p ≤ 0.01) and DIF (p ≤ 0.01), showing that the methods are not valid. Additionally, the DIF showed a greater degree of agreement with the reference method than the FT did, and both presented a systematic error. From the linear regression test, prediction equations were determined with a high degree of linearity between the methods: VID vs. DIF (R = 0.988) and VID vs. FT (R = 0.979). Therefore, the suggested prediction equations may allow coaches to measure the vertical jump performance of athletes by the FT and DIF, using a CTM or an FP, which represent more practical and viable approaches in the sports field; comparisons can then be made with the results of other athletes evaluated by VID.
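    The flight time method rests on projectile kinematics: with flight time t, the rise phase lasts t/2, so jump height h = g·t²/8. A minimal sketch (the correction equations the authors propose are not reproduced; their coefficients would come from the reported regressions):

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_flight):
    """Height from flight time, assuming symmetric rise and fall:
    rise time is t/2, so h = g*(t/2)^2/2 = g*t^2/8."""
    return G * t_flight ** 2 / 8.0

print(round(jump_height_from_flight_time(0.50), 4))  # → 0.3066 m
```

    The systematic error the study reports arises partly because landing posture differs from takeoff posture, violating the symmetric-flight assumption.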

  15. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
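    A schematic of MCCV versus LOO for a simple predictor (illustrative only; the study used GLS regression, here replaced by a mean predictor for brevity, and the flood values are invented):

```python
import math, random

def mccv_rmse(data, n_splits=100, leave_frac=0.2, seed=1):
    """Monte Carlo cross validation: repeatedly leave out a random
    fraction, fit on the rest, and score on the left-out part."""
    rng = random.Random(seed)
    k = max(1, int(len(data) * leave_frac))
    errors = []
    for _ in range(n_splits):
        shuffled = data[:]
        rng.shuffle(shuffled)
        test, train = shuffled[:k], shuffled[k:]
        pred = sum(train) / len(train)  # "model": the training mean
        errors.extend((y - pred) ** 2 for y in test)
    return math.sqrt(sum(errors) / len(errors))

def loo_rmse(data):
    """Leave-one-out validation of the same mean predictor."""
    errs = []
    for i, y in enumerate(data):
        rest = data[:i] + data[i + 1:]
        errs.append((y - sum(rest) / len(rest)) ** 2)
    return math.sqrt(sum(errs) / len(errs))

flood_quantiles = [12.0, 15.5, 9.8, 20.1, 13.4, 17.2, 11.6, 14.9]
print(round(mccv_rmse(flood_quantiles), 3), round(loo_rmse(flood_quantiles), 3))
```

    Because MCCV holds out a larger fraction and averages over many random splits, it penalizes over-fitted models more heavily than LOO, which is consistent with the paper's finding that MCCV selects more parsimonious models.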

  16. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
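    For a constant-frequency sinusoid, the simplest zero-crossing frequency estimate illustrates the principle the paper builds on (the adaptive-window polynomial IF fitting itself is not reproduced). A sketch with linearly interpolated crossing times:

```python
import math

def zero_crossing_freq(signal, fs):
    """Estimate frequency from interpolated zero-crossing times:
    successive crossings of a sinusoid are half a period apart."""
    times = []
    for k in range(len(signal) - 1):
        a, b = signal[k], signal[k + 1]
        if a == 0.0 or a * b < 0.0:
            # linear interpolation of the crossing instant
            times.append((k + a / (a - b)) / fs)
    # n crossings span (n - 1) half-periods
    return (len(times) - 1) / (2.0 * (times[-1] - times[0]))

fs = 1000.0
x = [math.sin(2 * math.pi * 50.0 * k / fs + 0.3) for k in range(1000)]
print(round(zero_crossing_freq(x, fs), 2))  # ≈ 50.0
```

    The paper's estimator generalizes this idea: crossing instants of a time-varying sinusoid constrain a local polynomial model of the IF, with the window length adapted to trade bias against variance.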

  17. Validation and Assessment of Three Methods to Estimate 24-h Urinary Sodium Excretion from Spot Urine Samples in Chinese Adults

    PubMed Central

    Peng, Yaguang; Li, Wei; Wang, Yang; Chen, Hui; Bo, Jian; Wang, Xingyu; Liu, Lisheng

    2016-01-01

    24-h urinary sodium excretion is the gold standard for evaluating dietary sodium intake, but it is often not feasible in large epidemiological studies due to high participant burden and cost. Three methods—Kawasaki, INTERSALT, and Tanaka—have been proposed to estimate 24-h urinary sodium excretion from a spot urine sample, but these methods have not been validated in the general Chinese population. The aim of this study was to assess the validity of three methods for estimating 24-h urinary sodium excretion using spot urine samples against measured 24-h urinary sodium excretion in a Chinese sample population. Data are from a substudy of the Prospective Urban Rural Epidemiology (PURE) study that enrolled 120 participants aged 35 to 70 years and collected their morning fasting urine and 24-h urine specimens. Bias calculations (estimated values minus measured values) and Bland-Altman plots were used to assess the validity of the three estimation methods. A total of 116 participants were included in the final analysis. Mean bias for the Kawasaki method was -740 mg/day (95% CI: -1219, -262 mg/day), and was the lowest among the three methods. Mean bias for the Tanaka method was -2305 mg/day (95% CI: -2735, -1875 mg/day). Mean bias for the INTERSALT method was -2797 mg/day (95% CI: -3245, -2349 mg/day), and was the highest of the three methods. Bland-Altman plots indicated that all three methods underestimated 24-h urinary sodium excretion. The Kawasaki, INTERSALT and Tanaka methods for estimation of 24-h urinary sodium excretion using spot urines all underestimated true 24-h urinary sodium excretion in this sample of Chinese adults. Among the three methods, the Kawasaki method was least biased, but was still relatively inaccurate. A more accurate method is needed to estimate the 24-h urinary sodium excretion from spot urine for assessment of dietary sodium intake in China. PMID:26895296
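The bias and Bland-Altman computations used in this validation are straightforward to sketch. The numbers below are hypothetical illustrations, not the PURE substudy data.

```python
import math
import statistics

def agreement_stats(estimated, measured):
    """Mean bias (estimated - measured), its 95% CI, and Bland-Altman limits.

    Limits of agreement are mean bias +/- 1.96 SD of the differences.
    """
    d = [e - m for e, m in zip(estimated, measured)]
    n = len(d)
    mean = statistics.mean(d)
    sd = statistics.stdev(d)
    half_ci = 1.96 * sd / math.sqrt(n)          # CI of the mean bias
    ci = (mean - half_ci, mean + half_ci)
    loa = (mean - 1.96 * sd, mean + 1.96 * sd)  # limits of agreement
    return mean, ci, loa
```

A uniformly negative mean bias with limits of agreement lying mostly below zero is the Bland-Altman signature of systematic underestimation reported for all three formulas here.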

  18. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in different population groups are extremely important, especially in cases in which population variation may hinder the identification of a native individual by the application of norms developed for different communities. Objective: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results: The application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% accuracy for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion: Methods involving physical anthropology present a high rate of accuracy for human identification, with easy application, low cost and simplicity; however, the methodologies must be validated for different populations due to differences in ethnic patterns, which are directly related to phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological

  19. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population.

    PubMed

    Carvalho, Suzana Papile Maciel; Brito, Liz Magalhães; Paiva, Luiz Airton Saavedra de; Bicudo, Lucilene Arilho Ribeiro; Crosato, Edgard Michel; Oliveira, Rogério Nogueira de

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. 
(1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended as demonstrated in this

  20. Validation of the TOPEX rain algorithm: Comparison with ground-based radar

    NASA Astrophysics Data System (ADS)

    McMillan, A. C.; Quartly, G. D.; Srokosz, M. A.; Tournadre, J.

    2002-02-01

    Recently developed algorithms have shown the potential recovery of rainfall information from spaceborne dual-frequency altimeters. Given the long mission achieved with TOPEX and the prospect of several other dual-frequency altimeters, we need to validate the altimetrically derived values so as to foster their integration with rain information from different sensors. Comparison with some alternative climatologies shows the bimonthly means for TOPEX to be low. Rather than apply a bulk correction we investigate individual rain events to understand the cause of TOPEX's underestimation. In this paper we compare TOPEX with near-simultaneous ground-based rain radars based at a number of locations, examining both the detection of rain and the quantitative values inferred. The altimeter-only algorithm is found to flag false rain events in very low wind states (<3.8 m s-1); the application of an extra test, involving the liquid water path as sensed by the microwave radiometer, removes the spurious detections. Some false detections of rain also occur at high wind speeds (>20 m s-1), where the empirical dual-frequency relationship is less well defined. In the intermediate range of wind speeds, the TOPEX detections are usually good, with the instrument picking up small-scale variations that cannot be recovered from infrared or passive microwave techniques. The magnitude of TOPEX's rain retrievals can differ by a factor of 2 from the ground-based radar, but this may reflect the uncertainties in the validation data. In general, over these individual point comparisons TOPEX values appear to exceed the "ground truth." Taking account of all the factors affecting the comparisons, we conclude that the TOPEX climatology could be improved by, in the first instance, incorporating the radiometric test and employing a better estimate of the melting layer height. Appropriate corrections for nonuniform beam filling and drizzle fraction are harder to define globally.

  1. Estimating Anesthesia Time Using the Medicare Claim: A Validation Study

    PubMed Central

    Silber, Jeffrey H.; Rosenbaum, Paul R.; Even-Shoshan, Orit; Mi, Lanyu; Kyle, Fabienne; Teng, Yun; Bratzler, Dale W.; Fleisher, Lee A.

    2012-01-01

    Introduction: Procedure length is a fundamental variable associated with quality of care, though seldom studied on a large scale. We sought to estimate procedure length through information obtained in the anesthesia claim submitted to Medicare to validate this method for future studies. Methods: The Obesity and Surgical Outcomes Study enlisted 47 hospitals located across New York, Texas and Illinois to study patients undergoing hip, knee, colon and thoracotomy procedures. 15,914 charts were abstracted to determine body mass index and initial patient physiology. Included in this abstraction were induction, cut, close and recovery room times. This chart information was merged with Medicare claims, which included anesthesia Part B billing information. Correlations between chart times and claim times were analyzed, models developed, and median absolute differences in minutes calculated. Results: Of the 15,914 eligible patients, there were 14,369 where both chart and claim times were available for analysis. In these 14,369, the Spearman correlation between chart and claim time was 0.94 (95% CI 0.94, 0.95) and the median absolute difference between chart and claim time was only 5 minutes (95% CI: 5.0, 5.5). The anesthesia claim can also be used to estimate surgical procedure length, with only a modest increase in error. Conclusion: The anesthesia bill found in Medicare claims provides an excellent source of information for studying operative time on a vast scale throughout the United States. However, errors in both chart abstraction and anesthesia claims can occur. Care must be taken in the handling of outliers in these data. PMID:21720242
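The two summary statistics reported here, Spearman correlation and median absolute difference, can be computed from scratch as below. The times are toy values, not the study data.

```python
import statistics

def ranks(values):
    """Ranks (1-based), with ties assigned the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def median_abs_diff(x, y):
    """Median absolute difference between paired measurements."""
    return statistics.median(abs(a - b) for a, b in zip(x, y))
```

Spearman correlation on ranks is the natural choice here because procedure times are right-skewed, and the median absolute difference is likewise robust to the outliers the authors warn about.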

  2. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
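Because the semi-nonparametric model nests the Gumbel-error model, the proposed test reduces to an ordinary likelihood ratio test between nested models. A generic sketch, with the standard chi-square cutoffs at the 5% level hardcoded (the log-likelihood values in the test are invented):

```python
def likelihood_ratio_test(ll_restricted, ll_full, df, alpha=0.05):
    """Likelihood ratio test of a restricted model against a nesting model.

    The statistic 2*(ll_full - ll_restricted) is asymptotically chi-square
    with df equal to the number of extra parameters in the full model.
    Rejecting means the restricted model (here, the Gumbel-error MNL/MDCEV)
    is inadequate relative to the more flexible semi-nonparametric model.
    """
    if alpha != 0.05:
        raise ValueError("only alpha=0.05 cutoffs are tabulated in this sketch")
    crit = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}
    stat = 2.0 * (ll_full - ll_restricted)
    return stat, stat > crit[df]
```

Deriving the nesting likelihood in closed form is the paper's contribution; once both log-likelihoods are available, the decision rule is exactly this two-line comparison.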

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  4. Age validation of canary rockfish (Sebastes pinniger) using two independent otolith techniques: lead-radium and bomb radiocarbon dating.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, A H; Kerr, L A; Cailliet, G M

    2007-11-04

    Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (14C) dating. Age estimate accuracy and the validity of age estimation procedures were validated based on the results from each technique. Lead-radium dating proved successful in determining that a minimum estimate of lifespan was 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ14C data, which indicated a minimum estimate of lifespan of 44 ± 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.

  5. Estimating body fat in NCAA Division I female athletes: a five-compartment model validation of laboratory methods.

    PubMed

    Moon, Jordan R; Eckerson, Joan M; Tobkin, Sarah E; Smith, Abbie E; Lockwood, Christopher M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R

    2009-01-01

    The purpose of the present study was to determine the validity of various laboratory methods for estimating percent body fat (%fat) in NCAA Division I college female athletes (n = 29; 20 ± 1 years). Body composition was assessed via hydrostatic weighing (HW), air displacement plethysmography (ADP), and dual-energy X-ray absorptiometry (DXA), and estimates of %fat derived using 4-compartment (4C), 3C, and 2C models were compared to a criterion 5C model that included bone mineral content, body volume (BV), total body water, and soft tissue mineral. The Wang-4C and the Siri-3C models produced nearly identical values compared to the 5C model (r > 0.99, total error (TE) < 0.40%fat). For the remaining laboratory methods, constant error values (CE) ranged from -0.04%fat (HW-Siri) to -3.71%fat (DXA); r values ranged from 0.89 (ADP-Siri, ADP-Brozek) to 0.93 (DXA); standard error of estimate values ranged from 1.78%fat (DXA) to 2.19%fat (ADP-Siri, ADP-Brozek); and TE values ranged from 2.22%fat (HW-Brozek) to 4.90%fat (DXA). The limits of agreement for DXA (-10.10 to 2.68%fat) were the largest with a significant trend of -0.43 (P < 0.05). With the exception of DXA, all of the equations resulted in acceptable TE values (<3.08%fat). However, the results for individual estimates of %fat using the Brozek equation indicated that the 2C models that derived BV from ADP and HW overestimated (5.38, 3.65%) and underestimated (5.19, 4.88%) %fat, respectively. The acceptable TE values for both HW and ADP suggest that these methods are valid for estimating %fat in college female athletes; however, the Wang-4C and Siri-3C models should be used to identify individual estimates of %fat in this population.
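The Siri and Brozek equations referenced here are the classic two-compartment conversions from whole-body density (as measured by HW or ADP) to %fat, with standard published constants:

```python
def siri_percent_fat(density):
    """Siri (1961) 2-compartment equation: %fat from body density (g/cm^3)."""
    return 495.0 / density - 450.0

def brozek_percent_fat(density):
    """Brozek et al. (1963) 2-compartment equation: %fat from body density."""
    return 457.0 / density - 414.2
```

For a typical athletic density of 1.07 g/cm^3 both equations return roughly 12-13 %fat; the small spread between them at normal densities is why the study's 2C methods differ mainly through the body-volume measurement (HW vs. ADP) rather than the equation chosen.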

  6. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established an association between mortality estimation and seismic intensity without considering population density. In China, however, such data are not always available, especially in the very urgent relief situation of a disaster, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze the death tolls for earthquakes. The present paper employs the average population density to predict final death tolls using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake case from the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death-toll forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.
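A case-based reasoning estimate of this kind can be sketched as a nearest-neighbour lookup over historical cases keyed by magnitude and average population density. Everything below (the cases, the feature scaling, the choice of k) is hypothetical and purely illustrative of the retrieval step, not the paper's actual model.

```python
def predict_deaths(query, cases, k=3):
    """Case-based reasoning sketch: average the death tolls of the k most
    similar historical cases.

    `query` is (magnitude, people per km^2); each case in `cases` is
    (magnitude, people per km^2, deaths). Features are rescaled (assumed
    scales: 1 magnitude unit ~ 100 people/km^2) so neither dominates the
    squared distance.
    """
    def dist(a, b):
        return ((a[0] - b[0]) / 1.0) ** 2 + ((a[1] - b[1]) / 100.0) ** 2

    nearest = sorted(cases, key=lambda c: dist(query, (c[0], c[1])))[:k]
    return sum(c[2] for c in nearest) / k
```

In practice the retrieval would be followed by case adaptation (adjusting the retrieved toll for differences in depth, time of day, building stock, and so on), which is where most of the modelling effort in such systems lies.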

  7. Validity of a food frequency questionnaire to estimate long-chain polyunsaturated fatty acid intake among Japanese women in early and late pregnancy.

    PubMed

    Kobayashi, Minatsu; Jwa, Seung Chik; Ogawa, Kohei; Morisaki, Naho; Fujiwara, Takeo

    2017-01-01

    The relative validity of food frequency questionnaires for estimating long-chain polyunsaturated fatty acid (LC-PUFA) intake among pregnant Japanese women is currently unclear. The aim of this study was to verify the external validity of a food frequency questionnaire, originally developed for non-pregnant adults, to assess the dietary intake of LC-PUFA using dietary records and serum phospholipid levels among Japanese women in early and late pregnancy. A validation study involving 188 participants in early pregnancy and 169 participants in late pregnancy was conducted. LC-PUFA intake was estimated using a food frequency questionnaire and evaluated using a 3-day dietary record and serum phospholipid concentrations in both early and late pregnancy. The food frequency questionnaire provided estimates of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) intake with higher precision than dietary records in both early and late pregnancy. Significant correlations were observed for LC-PUFA intake estimated using dietary records in both early and late pregnancy, particularly for EPA and DHA (correlation coefficients ranged from 0.34 to 0.40, p < 0.0001). Similarly, high correlations for EPA and DHA in serum phospholipid composition were also observed in both early and late pregnancy (correlation coefficients ranged from 0.27 to 0.34, p < 0.0001). Our findings suggest that the food frequency questionnaire, which was originally designed for non-pregnant adults and was evaluated in this study against dietary records and biological markers, has good validity for assessing LC-PUFA intake, especially EPA and DHA intake, among Japanese women in early and late pregnancy. Copyright © 2016 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  8. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    PubMed

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
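The Ogata-Banks solution adopted as the data model is a closed-form solution of the 1-D advection-dispersion equation for steady flow with a constant inlet concentration. A direct transcription, with symbol names assumed (v for pore velocity, D for the dispersion coefficient):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """C(x, t) for 1-D advection-dispersion, steady flow, constant inlet c0.

    C/c0 = 0.5 * [erfc((x - v t)/(2 sqrt(D t)))
                  + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    Note: exp(v*x/D) can overflow for large v*x/D; the usual asymptotic
    simplifications are omitted in this sketch.
    """
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    return 0.5 * c0 * (term1 + term2)
```

The breakthrough behaviour is as expected: concentration at a fixed point is near zero before the advective front arrives and relaxes to the inlet value c0 long after it passes, which is the shape the data-driven ensemble is being fit against.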

  9. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.

  10. SEE rate estimation based on diffusion approximation of charge collection

    NASA Astrophysics Data System (ADS)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  11. Validation of OMI erythemal doses with multi-sensor ground-based measurements in Thessaloniki, Greece

    NASA Astrophysics Data System (ADS)

    Zempila, Melina Maria; Fountoulakis, Ilias; Taylor, Michael; Kazadzis, Stelios; Arola, Antti; Koukouli, Maria Elissavet; Bais, Alkiviadis; Meleti, Chariklia; Balis, Dimitrios

    2018-06-01

    The aim of this study is to validate the Ozone Monitoring Instrument (OMI) erythemal dose rates using ground-based measurements in Thessaloniki, Greece. In the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki, a Yankee Environmental System UVB-1 radiometer measures the erythemal dose rates every minute, and a Norsk Institutt for Luftforskning (NILU) multi-filter radiometer provides multi-filter based irradiances that were used to derive erythemal dose rates for the period 2005-2014. Both these datasets were independently validated against collocated UV irradiance spectra from a Brewer MkIII spectrophotometer. Cloud detection was performed based on measurements of the global horizontal radiation from a Kipp & Zonen pyranometer and from NILU measurements in the visible range. The satellite versus ground observation validation was performed taking into account the effect of temporal averaging, limitations related to OMI quality control criteria, cloud conditions, the solar zenith angle and atmospheric aerosol loading. Aerosol optical depth was also retrieved using a collocated CIMEL sunphotometer in order to assess its impact on the comparisons. The effect of satellite versus ground-based differences in total ozone columns on the erythemal dose comparisons was also investigated. Since most of the public awareness alerts are based on UV Index (UVI) classifications, an analysis and assessment of OMI capability for retrieving UVIs was also performed. An overestimation of the OMI erythemal product by 3-6% and 4-8% with respect to ground measurements is observed when examining overpass and noontime estimates, respectively. The comparisons revealed a relatively small solar zenith angle dependence, with the OMI data showing a slight dependence on aerosol load, especially at high aerosol optical depth values. A mean underestimation of 2% in OMI total ozone columns under cloud-free conditions was found to lead to an overestimation in OMI erythemal

  12. Validating precision estimates in horizontal wind measurements from a Doppler lidar

    DOE PAGES

    Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...

    2017-03-30

    Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating the precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.
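The VAD technique underlying the retrieval fits a first azimuthal harmonic to the radial velocities; for evenly spaced beams (as in the 8-beam PPI scan here) the fit reduces to simple Fourier sums. A sketch under the common sign convention vr = w sin(el) + cos(el)(u sin(az) + v cos(az)) (conventions vary between instruments); the error propagation that is the paper's focus is omitted:

```python
import math

def vad_retrieve(azimuths_deg, vr, elev_deg):
    """First-harmonic VAD fit for evenly spaced azimuths.

    Assumed model: vr = w*sin(el) + cos(el)*(u*sin(az) + v*cos(az)).
    The Fourier sums below are the exact least-squares fit when the
    azimuths are evenly spaced around the full circle.
    """
    el = math.radians(elev_deg)
    n = len(vr)
    a0 = sum(vr) / n
    a1 = 2.0 / n * sum(r * math.cos(math.radians(az))
                       for r, az in zip(vr, azimuths_deg))
    b1 = 2.0 / n * sum(r * math.sin(math.radians(az))
                       for r, az in zip(vr, azimuths_deg))
    u = b1 / math.cos(el)   # zonal wind
    v = a1 / math.cos(el)   # meridional wind
    w = a0 / math.sin(el)   # vertical wind
    return u, v, w

# synthetic check: u=3, v=4, w=0.1 m/s seen by an 8-beam scan at 60 deg elevation
az = [i * 45.0 for i in range(8)]
el = 60.0
vr = [0.1 * math.sin(math.radians(el))
      + math.cos(math.radians(el)) * (3.0 * math.sin(math.radians(a))
                                      + 4.0 * math.cos(math.radians(a)))
      for a in az]
u, v, w = vad_retrieve(az, vr, el)
```

Propagating independent radial-velocity errors through these linear sums is what yields the closed-form speed and direction precision estimates that the paper validates against the sonic anemometer.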

  13. Validity of near-infrared interactance (FUTREX 6100/XL) for estimating body fat percentage in elite rowers.

    PubMed

    Fukuda, David H; Wray, Mandy E; Kendall, Kristina L; Smith-Ryan, Abbie E; Stout, Jeffrey R

    2017-07-01

    This investigation aimed to compare hydrostatic weighing (HW) with near-infrared interactance (NIR) and skinfold measurements (SKF) in estimating body fat percentage (FAT%) in rowing athletes. FAT% was estimated in 20 elite male rowers (mean ± SD: age = 24·8 ± 2·2 years, height = 191·0 ± 6·8 cm, weight = 86·8 ± 11·3 kg, HW FAT% = 11·50 ± 3·16%) using HW with residual volume, 3-site SKF and NIR on the biceps brachii. Predicted FAT% values for NIR and SKF were validated against the criterion method of HW. Constant error was not significant for NIR (-0·06, P = 0·955) or SKF (-0·20, P = 0·813). Neither NIR (r = 0·045) nor SKF (r = 0·229) demonstrated significant validity coefficients when compared to HW. The standard error of the estimate values for NIR and SKF were both less than 3·5%, while total error was 4·34% and 3·60%, respectively. When compared to HW, SKF and NIR provide similar mean values, but the lack of apparent relationships between individual values and borderline unacceptable total error may limit their application in this population. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  14. Simulation of fMRI signals to validate dynamic causal modeling estimation

    NASA Astrophysics Data System (ADS)

    Anandwala, Mobin; Siadat, Mohamad-Reza; Hadi, Shamil M.

    2012-03-01

    During cognitive tasks, certain brain areas are activated and receive increased blood flow. This is modeled through a state system consisting of two separate parts: one that deals with neural node stimulation and the other with the blood response during that stimulation. The rationale behind using this state system is to validate existing analysis methods such as DCM to see what levels of noise they can handle. Using the forward Euler method, this system was approximated as a series of difference equations. The result was the hemodynamic response for each brain area, which was used to test an analysis tool that estimates functional connectivity between brain areas under a given amount of noise. The importance of modeling this system is not only to have a model for the neural response but also to compare against actual data obtained through functional imaging scans.
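    The forward-Euler approximation described above can be illustrated with a toy two-part state system; the linear dynamics and time constants below are assumptions for illustration, not the hemodynamic model used in the study:

```python
def simulate_bold(stim, dt=0.1, tau_z=1.0, tau_h=2.0):
    """Forward-Euler integration of a toy two-part state system:
    a neural state z driven by the stimulus s(t), and a slower
    hemodynamic state h driven by z. Each update is the difference
    equation x[k+1] = x[k] + dt * f(x[k])."""
    z, h = 0.0, 0.0
    zs, hs = [], []
    for s in stim:
        z = z + dt * (-z / tau_z + s)    # neural response to stimulation
        h = h + dt * ((z - h) / tau_h)   # blood response lags the neural state
        zs.append(z)
        hs.append(h)
    return zs, hs
```

    A boxcar stimulus produces a neural response that peaks when the stimulus ends and a hemodynamic response that peaks later with a smaller amplitude, which is the qualitative lag the abstract refers to.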

  15. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate
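    The Type I (failure-time-only) stage described above can be sketched with a conditional Weibull reliability calculation; the closed-form median-RUL expression below assumes known Weibull parameters fitted from historical failures, not the full Bayesian transition framework developed in the project:

```python
import math

def weibull_median_rul(eta, beta, age):
    """Type I prognostic sketch: median remaining useful life of a
    component that has survived to `age`, under a Weibull(eta, beta)
    failure-time distribution. Conditional reliability is
    R(age + x | age) = exp((age/eta)**beta - ((age + x)/eta)**beta);
    solving R = 0.5 for x gives the closed form below."""
    return eta * ((age / eta) ** beta + math.log(2.0)) ** (1.0 / beta) - age
```

    For beta = 1 the Weibull reduces to the memoryless exponential, so the median RUL is independent of age; for beta > 1 (wear-out) the RUL estimate shrinks as the component ages, which is the behavior the lifecycle framework then refines with Type II/III information.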

  16. Validity of photographs for food portion estimation in a rural West African setting.

    PubMed

    Huybregts, L; Roberfroid, D; Lachat, C; Van Camp, J; Kolsteren, P

    2008-06-01

    To validate food photographs for food portion size estimation of frequently consumed dishes, to be used in a 24-hour recall food consumption study of pregnant women in a rural environment in Burkina Faso. This food intake study is part of an intervention evaluating the efficacy of prenatal micronutrient supplementation on birth outcomes. Women of childbearing age (15-45 years). A food photograph album containing four photographs of food portions per food item was compiled for eight selected food items. Subjects were each presented with two food items in the morning and two in the afternoon. These foods were weighed to the exact weight of a food depicted in one of the photographs and were served in the same receptacles. The next day another fieldworker presented the food photographs to the subjects to test their ability to choose the correct photograph. The correct photograph out of the four proposed was chosen in 55% of 1028 estimations. For each food, proportions of underestimating and overestimating participants were balanced, except for rice and couscous. On a group level, mean differences between served and estimated portion sizes were between -8.4% and 6.3%. Subjects who attended school were almost twice as likely to choose the correct photograph. The portion size served (smallest vs. largest sizes) had a significant influence on the portion estimation ability. The results from this study indicate that in a West African rural setting, food photographs can be a valuable tool for the quantification of food portion size at the group level.

  17. How Valid are Estimates of Occupational Illness?

    ERIC Educational Resources Information Center

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  18. [Prognostic estimation in critical patients. Validation of a new and very simple system of prognostic estimation of survival in an intensive care unit].

    PubMed

    Abizanda, R; Padron, A; Vidal, B; Mas, S; Belenguer, A; Madero, J; Heras, A

    2006-04-01

    To validate a new system of prognostic estimation of survival in critical patients (EPEC) seen in a multidisciplinary Intensive Care Unit (ICU). Prospective analysis of a patient cohort seen in the ICU of a multidisciplinary Intensive Medicine Service of a reference teaching hospital with 19 beds. Four hundred eighty-four patients admitted consecutively over 6 months in 2003. Data collection of a basic minimum data set that includes patient identification data (gender, age), reason for admission and origin, and prognostic estimation of survival by EPEC, MPM II 0 and SAPS II (the latter two considered the gold standard). Mortality was evaluated on hospital discharge. EPEC validation was done by analysis of its discriminating capacity (ROC curve), calibration of its prognostic capacity (Hosmer-Lemeshow C test), and resolution of 2 × 2 contingency tables around different probability values (20, 50, 70 and the mean value of the prognostic estimation). The standardized mortality rate (SMR) for each of the methods was calculated. Linear regression of the EPEC against MPM II 0 and SAPS II was established, and concordance analyses (Bland-Altman test) of the prediction of mortality by the three systems were done. In spite of an apparently good linear correlation, similar accuracy of prediction and discrimination capacity, EPEC is not well calibrated (no likelihood of death greater than 50%), and the concordance analyses show that more than 10% of the pairs were outside the 95% confidence interval. In spite of its ease of application and calculation and its incorporation of delay of admission to the ICU as a variable, EPEC does not offer any predictive advantage over MPM II 0 or SAPS II, and its predictions fit reality worse.

  19. Validation of a noninvasive maturity estimate relative to skeletal age in youth football players.

    PubMed

    Malina, Robert M; Dompier, Thomas P; Powell, John W; Barron, Mary J; Moore, Marguerite T

    2007-09-01

    To validate a non-invasive measure of biological maturity (percentage of predicted mature height at a given age) with an established indicator of maturity [skeletal age (SA)] in youth American football players. Cross-sectional. Two communities in central Michigan. 143 youth football players aged 9.27 to 14.24 years. Height and weight were measured, and hand-wrist radiographs were taken. SA assessed with the Fels method was the criterion measure of maturity status. Chronological age (CA), height, and weight of the player and midparent height were used to predict mature height; current height of the player was expressed as a percentage of his predicted mature height as a noninvasive estimate of biological maturity status. Boys' maturation was classified as late, on time, or early maturing on the basis of the difference between SA and CA and of present height expressed as a percentage of predicted mature height. Kappa coefficients and Spearman rank-order correlations were calculated. Characteristics of players concordant and discordant for maturity classification with SA and percentage of predicted mature height were compared with MANCOVA. Concordance between methods of maturity classification was 62%. The Kappa coefficient, 0.46 (95% CI 0.19 to 0.59), and Spearman rank-order correlation, rs = 0.52 (P < 0.001), were moderate. Players discordant for maturity status varied in midparent height and percentage of predicted mature height, but not in predicted mature height. Percentage of predicted mature height is a reasonably valid estimate of biological maturity status in this sample of youth football players.
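    The kappa statistic used above to quantify concordance between the two three-category maturity classifications (late / on time / early) is standard Cohen's kappa; a minimal sketch:

```python
def cohen_kappa(a, b, cats):
    """Cohen's kappa for two categorical classifications of the same
    subjects: chance-corrected agreement (po - pe) / (1 - pe), where
    po is observed agreement and pe is agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```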

  20. Validation of a questionnaire method for estimating extent of menstrual blood loss in young adult women.

    PubMed

    Heath, A L; Skeaff, C M; Gibson, R S

    1999-04-01

    The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD; 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand). Data are presented for 29 women. A recall method and a record method for estimating extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by Weighed Menstrual Loss and the Menstrual Record was rs = 0.47 (p = 0.012); between Weighed Menstrual Loss and the Menstrual Recall it was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.

  1. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

    Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction

  2. Family-oriented cardiac risk estimator: a Java web-based applet.

    PubMed

    Crouch, Michael A; Jadhav, Ashwin

    2003-01-01

    We developed a Java applet that calculates four different estimates of a person's 10-year risk for heart attack: (1) an estimate based on the Framingham equation; (2) the Framingham estimate modified by C-reactive protein (CRP) level; (3) the Framingham estimate modified by family history of heart disease in parents or siblings; and (4) the Framingham estimate modified by both CRP and family heart disease history. This web-based, family-oriented cardiac risk estimator uniquely considers family history and CRP while estimating risk.

  3. A citizen science based survey method for estimating the density of urban carnivores.

    PubMed

    Scott, Dawn M; Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W; Mill, Aileen C; Smith, Graham C; Tolhurst, Bryony A

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1 km2 suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on

  5. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    PubMed

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair, then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge base, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
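    The link-frequency/link-ratio computation can be sketched as follows; the record format, names, and threshold here are illustrative paraphrases, and the exact published definitions may differ:

```python
from collections import defaultdict

def build_knowledge_base(records, threshold=0.5):
    """Crowdsourcing sketch: records are (patient_id, problems, meds)
    tuples drawn from routine care. Link frequency = number of patients
    in whom a problem and a medication co-occur; link ratio = that count
    divided by the number of patients on the medication. Pairs whose
    link ratio meets the threshold form the knowledge base."""
    pair_n = defaultdict(int)
    med_n = defaultdict(int)
    for _pid, problems, meds in records:
        for m in set(meds):
            med_n[m] += 1
            for p in set(problems):
                pair_n[(p, m)] += 1
    return {pm for pm, n in pair_n.items() if n / med_n[pm[1]] >= threshold}

def precision_recall(kb, gold):
    """Evaluate the derived knowledge base against gold-standard pairs."""
    tp = len(kb & gold)
    return tp / len(kb), tp / len(gold)
```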

  6. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  7. Validation of the CHIRPS Satellite Rainfall Estimates over Eastern of Africa

    NASA Astrophysics Data System (ADS)

    Dinku, T.; Funk, C. C.; Tadesse, T.; Ceccato, P.

    2017-12-01

    Long and temporally consistent rainfall time series are essential in climate analyses and applications. Rainfall data from station observations are inadequate over many parts of the world due to sparse or non-existent observation networks, or limited reporting of gauge observations. As a result, satellite rainfall estimates have been used as an alternative or as a supplement to station observations. However, many satellite-based rainfall products with long time series suffer from coarse spatial and temporal resolutions and inhomogeneities caused by variations in satellite inputs. There are some satellite rainfall products with reasonably consistent time series, but they are often limited to specific geographic areas. The Climate Hazards Group Infrared Precipitation (CHIRP) and CHIRP combined with station observations (CHIRPS) are recently produced satellite-based rainfall products with relatively high spatial and temporal resolutions and quasi-global coverage. In this study, CHIRP and CHIRPS were evaluated over East Africa at daily, dekadal (10-day) and monthly time scales. The evaluation was done by comparing the satellite products with rain gauge data from about 1200 stations. This is an unprecedented number of validation stations for this region. The results provide a unique region-wide understanding of how satellite products perform over different climatic/geographic (lowland, mountainous, and coastal) regions. The CHIRP and CHIRPS products were also compared with two similar satellite rainfall products: the African Rainfall Climatology version 2 (ARC2) and the latest release of the Tropical Applications of Meteorology using Satellite data (TAMSAT). The results show that both CHIRP and CHIRPS products are significantly better than ARC2 with higher skill and low or no bias. These products were also found to be slightly better than the latest version of the TAMSAT product.
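    Point-to-gauge validation of the kind described above typically reduces to a few summary metrics; this sketch (function name and metric choices are assumptions, not the study's exact protocol) computes multiplicative bias, mean absolute error, and Pearson correlation against gauge totals:

```python
import math

def gauge_validation(sat, gauge):
    """Compare co-located satellite rainfall estimates with rain-gauge
    totals: multiplicative bias (sum ratio), mean absolute error, and
    Pearson correlation."""
    n = len(sat)
    bias = sum(sat) / sum(gauge)
    mae = sum(abs(s - g) for s, g in zip(sat, gauge)) / n
    ms, mg = sum(sat) / n, sum(gauge) / n
    cov = sum((s - ms) * (g - mg) for s, g in zip(sat, gauge))
    vs = sum((s - ms) ** 2 for s in sat)
    vg = sum((g - mg) ** 2 for g in gauge)
    return bias, mae, cov / math.sqrt(vs * vg)
```

    "Low or no bias" in the abstract corresponds to a multiplicative bias near 1, and "higher skill" to a higher correlation (or a categorical skill score) against the gauge network.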

  8. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily degrade communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least-squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
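    For reference, the conventional LS baseline mentioned above estimates the channel at each pilot subcarrier as H[k] = Y[k] / X[k]; the sketch below shows only this baseline (the proposed ZCC-based parametric algorithm is not reproduced here):

```python
def ls_channel_estimate(rx_pilots, tx_pilots):
    """Conventional least-squares (LS) pilot-based channel estimate:
    divide each received pilot Y[k] by the known transmitted pilot X[k]
    to obtain the frequency response H_hat[k] at that subcarrier.
    With noise-free pilots this recovers the channel exactly; with noise,
    the estimate error grows on subcarriers with small |X[k]|."""
    return [y / x for y, x in zip(rx_pilots, tx_pilots)]
```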

  9. Application of the Combination Approach for Estimating Evapotranspiration in Puerto Rico

    NASA Technical Reports Server (NTRS)

    Harmsen, Eric; Luvall, Jeffrey; Gonzalez, Jorge

    2005-01-01

    The ability to estimate short-term fluxes of water vapor from the land surface is important for validating latent heat flux estimates from high resolution remote sensing techniques. A new, relatively inexpensive method is presented for estimating the ground-based values of the surface latent heat flux or evapotranspiration.

  10. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    PubMed

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees in the full fruit-bearing stage were the research objects. Canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models for predicting LAI were built with two multivariate regression methods: support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FD-NDVI656 and GRVI517, together with the two established vegetation indices NDVI670 and NDVI705, accorded well with LAI. For the RF regression model, the calibration-set determination coefficient C-R2 of 0.920 and validation-set determination coefficient V-R2 of 0.889 were higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration-set root mean square error C-RMSE of 0.249 and validation-set root mean square error V-RMSE of 0.236 were lower than those of the SVM regression model by 0.054 and 0.058, respectively. The relative predictive deviations of the calibration and validation sets, C-RPD and V-RPD, reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes of the measured-versus-predicted scatterplot trend lines for the calibration and validation sets, C-S and V-S, were close to 1. The estimation result of the RF regression model is better than that of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.

  11. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS of the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  12. Estimating the Population of Survivors of Suicide: Seeking an Evidence Base

    ERIC Educational Resources Information Center

    Berman, Alan L.

    2011-01-01

    Shneidman (1973) derived an estimate of six survivors for every suicide that, in the ensuing years, has become an assumed fact underlying public health messaging campaigns in support of suicide prevention and postvention programs worldwide, in spite of it lacking either empirical testing or validation. This report offers a first test designed to…

  13. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net.

    PubMed

    Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun

    2018-06-07

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining the orbital ephemerides of Korean low Earth orbit (LEO) satellites. The OWL-Net consists of five optical tracking stations. Brightness signals of sunlight reflected by the targets were detected with a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, up to 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated against precise orbital ephemerides such as Consolidated Prediction File (CPF) data and precise orbit determination results based on onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, the average observation span for a single arc of 11 LEO observation targets was about 5 min, while the average separation between optical observations was 5 h. We estimated the position and velocity, along with an atmospheric drag coefficient, of the LEO observation targets using a sequential-batch orbit estimation technique applied after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch and sequential-batch orbit estimations were analyzed for the optical measurements and the reference orbits (CPF and GPS data). The post-fit residuals against the reference show errors of a few tens of meters in the in-track direction for both the multi-arc batch and sequential-batch orbit estimation results.

  14. Estimation of Canopy Sunlit Fraction of Leaf Area from Ground-Based Measurements

    NASA Astrophysics Data System (ADS)

    Yang, B.; Knyazikhin, Y.; Yan, K.; Chen, C.; Park, T.; CHOI, S.; Mottus, M.; Rautiainen, M.; Stenberg, P.; Myneni, R.; Yan, L.

    2015-12-01

    The sunlit fraction of leaf area (SFLA), defined as the fraction of the total hemisurface leaf area illuminated by the direct solar beam, is a key structural variable in many global models of climate, hydrology, biogeochemistry and ecology. SFLA is expected to be a standard product from the Earth Polychromatic Imaging Camera (EPIC) on board the joint NOAA, NASA and US Air Force Deep Space Climate Observatory (DSCOVR) mission, which was successfully launched from Cape Canaveral, Florida on February 11, 2015. The DSCOVR EPIC sensor, orbiting the Sun-Earth Lagrange L1 point, provides multispectral measurements of the radiation reflected by Earth in retro-illumination directions. This poster discusses a methodology for estimating the SFLA using the LAI-2000 Canopy Analyzer, which is expected to underlie the strategy for validation of the DSCOVR EPIC land surface products. LAI-2000 data collected over 18 coniferous and broadleaf sites in Hyytiälä, Central Finland, were used to estimate the SFLA. Field data on canopy geometry were used to simulate selected sites, and their SFLA was calculated using a Monte Carlo (MC) technique. LAI-2000 estimates of SFLA showed very good agreement with the MC results, suggesting the validity of the proposed approach.

  15. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    ERIC Educational Resources Information Center

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…
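    A standard building block for sensitivity analysis of this kind is the Rogan-Gladen correction, which back-calculates true prevalence from the apparent prevalence given the reference standard's sensitivity and specificity. This sketch shows that classic correction, not necessarily the authors' exact formulation.

```python
def corrected_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen estimator: recover true prevalence from the
    apparent (test-positive) prevalence of an imperfect instrument,
    p_true = (p_apparent + Sp - 1) / (Se + Sp - 1), clipped to [0, 1]."""
    p = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(1.0, max(0.0, p))

# With Se = 0.8 and Sp = 0.9, a true prevalence of 0.20 yields an
# apparent prevalence of 0.2*0.8 + 0.8*(1 - 0.9) = 0.24
```

    Latent class analysis generalizes this idea by estimating sensitivity and specificity jointly from several imperfect measures instead of assuming them known.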

  16. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when using the wavelet transform to analyze signals and extract signal features. Because the accuracy of traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study adopts a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Building on this noise variance estimate, a novel improved wavelet threshold function is proposed that combines the advantages of the hard and soft threshold functions, and the estimation algorithm and the improved threshold function together form a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation performs well on the testing signals of the electro-mechanical transmission system: it effectively eliminates the interference of transient signals, including voltage, current, and oil pressure, while favorably maintaining the dynamic characteristics of the signals.
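    The two ingredients can be sketched as follows. In place of the paper's two-state Gaussian mixture classifier, this sketch uses Donoho's simpler median-absolute-deviation rule on the finest-scale coefficients as a stand-in noise variance estimate, and a one-parameter threshold function that interpolates between hard (alpha = 0) and soft (alpha = 1) thresholding; the function names and alpha are assumptions.

```python
import numpy as np

def estimate_noise_sigma(detail_coeffs):
    """Robust noise estimate from finest-scale wavelet detail
    coefficients (Donoho's MAD rule, stand-in for the GMM classifier)."""
    return np.median(np.abs(detail_coeffs)) / 0.6745

def improved_threshold(w, T, alpha=0.5):
    """Threshold function between hard (alpha=0) and soft (alpha=1):
    coefficients below T are zeroed, the rest are shrunk by alpha*T."""
    out = np.zeros_like(w, dtype=float)
    keep = np.abs(w) >= T
    out[keep] = np.sign(w[keep]) * (np.abs(w[keep]) - alpha * T)
    return out

# Universal threshold from the estimated sigma
d = np.array([0.1, -0.2, 3.0, 0.05, -2.5, 0.15])
sigma = estimate_noise_sigma(d)
T = sigma * np.sqrt(2 * np.log(d.size))
clean = improved_threshold(d, T, alpha=0.5)
```

    Intermediate alpha values keep large coefficients closer to their observed magnitude (the hard-threshold advantage) while avoiding the discontinuity at T (the soft-threshold advantage).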

  17. Validation of satellite-based rainfall in Kalahari

    NASA Astrophysics Data System (ADS)

    Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter

    2018-06-01

    Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB) over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed (TVSF) bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable indicators of SRE performance were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicate an SRE's sensitivity and bias of rainfall detection, respectively. The TVSF bias-correction scheme improved some error measures but reduced the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs. This study highlighted SREs as a valuable source of daily rainfall data.
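    The core of a Time-Varying Space-Fixed scheme is a single multiplicative factor per time step, recomputed from gauge and satellite totals but applied uniformly in space. A minimal sketch under assumptions of mine (a trailing accumulation window and a minimum-rain guard `eps`; the paper's exact accumulation rule may differ):

```python
import numpy as np

def tvsf_factors(gauge, sre, window=7, eps=0.1):
    """Time-Varying Space-Fixed bias factors: for each day, the ratio of
    gauge to satellite rainfall accumulated over a trailing window,
    computed once per day and applied to every pixel (space-fixed)."""
    gauge = np.asarray(gauge, float)
    sre = np.asarray(sre, float)
    factors = np.ones_like(sre)
    for t in range(sre.size):
        lo = max(0, t - window + 1)
        g, s = gauge[lo:t + 1].sum(), sre[lo:t + 1].sum()
        if s > eps:  # avoid dividing by near-zero satellite totals
            factors[t] = g / s
    return factors

# Correct a satellite series that systematically halves the gauge rain
gauge = np.array([2.0, 2.0, 0.0, 4.0])
sre = np.array([1.0, 1.0, 0.0, 2.0])
corrected = sre * tvsf_factors(gauge, sre, window=2)
```

    Because one factor is shared by all pixels, the correction can flatten spatial contrasts, which is consistent with the reduced spatial correlation distance reported above.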

  18. A generalized groundwater fluctuation model based on precipitation for estimating water table levels of deep unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Shik Han, Weon; Kim, Kue-Young; Suk, Heejun; Beom Jo, Si

    2018-07-01

    A generalized water table fluctuation model based on precipitation was developed using a statistical conceptualization of unsaturated infiltration fluxes. A gamma distribution function was adopted as a transfer function due to its versatility in representing recharge rates with temporally dispersed infiltration fluxes, and a Laplace transformation was used to obtain an analytical solution. To prove the general applicability of the model, convergences with previous water table fluctuation models were shown as special cases. For validation, a few hypothetical cases were developed, where the applicability of the model to a wide range of unsaturated zone conditions was confirmed. For further validation, the model was applied to water table level estimations of three monitoring wells with considerably thick unsaturated zones on Jeju Island. The results show that the developed model represented the pattern of hydrographs from the two monitoring wells fairly well. The lag times from precipitation to recharge estimated from the developed system transfer function were found to agree with those from a conventional cross-correlation analysis. The developed model has the potential to be adopted for the hydraulic characterization of both saturated and unsaturated zones by being calibrated to actual data when extraneous and exogenous causes of water table fluctuation are limited. In addition, as it provides reference estimates, the model can be adopted as a tool for surveilling groundwater resources under hydraulically stressed conditions.
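    The central idea, a gamma-distributed transfer function that spreads each precipitation pulse into a delayed recharge signal, can be sketched as a discrete convolution. The shape/scale parameters, the daily lag grid, and the gain are illustrative assumptions; the paper itself works with an analytical Laplace-domain solution rather than this numerical form.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_kernel(k, theta, n_lag):
    """Gamma-distribution transfer function (shape k, scale theta)
    sampled at daily lag-bin midpoints and normalized to unit sum."""
    t = np.arange(n_lag, dtype=float) + 0.5
    pdf = t ** (k - 1) * np.exp(-t / theta) / (gamma_fn(k) * theta ** k)
    return pdf / pdf.sum()

def recharge_response(precip, k=2.0, theta=5.0, n_lag=60, gain=1.0):
    """Recharge-driven water-table forcing as precipitation convolved
    with the gamma transfer function (illustrative units/parameters)."""
    h = gamma_kernel(k, theta, n_lag)
    return gain * np.convolve(precip, h)[: len(precip)]
```

    A single rain pulse then produces a delayed, dispersed water-table response whose lag grows with the thickness of the unsaturated zone (larger k·theta), matching the lag times the cross-correlation analysis recovers.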

  19. The predictive validity of the Two-Tiered Violence Risk Estimates Scale (TTV) in a long-term follow-up of violent offenders.

    PubMed

    Churcher, Frances P; Mills, Jeremy F; Forth, Adelle E

    2016-08-01

    Over the past few decades, many structured risk appraisal measures have been created to respond to the need for accurate violence risk assessment. The Two-Tiered Violence Risk Estimates Scale (TTV) is a measure designed to integrate an actuarial estimate of violence risk with critical risk management indicators. The current study examined the interrater reliability and predictive validity of the TTV in a sample of violent offenders (n = 120) over an average follow-up period of 17.75 years. The TTV was retrospectively scored and compared with the Violence Risk Appraisal Guide (VRAG), the Statistical Information on Recidivism Scale-Revised (SIR-R1), and the Psychopathy Checklist-Revised (PCL-R). Approximately 53% of the sample reoffended violently, with an overall recidivism rate of 74%. Although the VRAG was the strongest predictor of violent recidivism in the sample, the Actuarial Risk Estimates (ARE) scale of the TTV produced a small, significant effect. The Risk Management Indicators (RMI) produced nonsignificant area under the curve (AUC) values for all recidivism outcomes. Comparisons between measures using AUC values and Cox regression showed no statistical differences in predictive validity. The results of this research will inform the validation and reliability literature on the TTV and contribute to the overall risk assessment literature. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
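    The AUC values compared above are equivalent to the Mann-Whitney U statistic, which can be computed directly from the two groups' scores; a minimal sketch (the study's actual AUCs were of course computed from its own data):

```python
import numpy as np

def auc(scores_recid, scores_nonrecid):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly drawn recidivist scores above a randomly drawn
    non-recidivist, with ties counted as one half."""
    pos = np.asarray(scores_recid, float)[:, None]
    neg = np.asarray(scores_nonrecid, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)
```

    An AUC of 0.5 is chance-level discrimination, which is what a "nonsignificant AUC" for the RMI scale amounts to; 1.0 is perfect rank separation.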

  20. Modeling and validating the cost and clinical pathway of colorectal cancer.

    PubMed

    Joranger, Paal; Nesbakken, Arild; Hoff, Geir; Sorbye, Halfdan; Oshaug, Arne; Aas, Eline

    2015-02-01

    Cancer is a major cause of morbidity and mortality, and colorectal cancer (CRC) is the third most common cancer in the world. The estimated costs of CRC treatment vary considerably, and if CRC costs in a model are based on empirically estimated total costs of stage I, II, III, or IV treatments, they lack the flexibility to capture future changes in CRC treatment. The purpose was 1) to describe how to model CRC costs and survival and 2) to validate the model in a transparent and reproducible way. We applied a semi-Markov model with 70 health states that tracked age and time since specific health states (using tunnels and a 3-dimensional data matrix). The model parameters are based on an observational study at Oslo University Hospital (2,049 CRC patients), the National Patient Register, literature, and expert opinion. The target population was patients diagnosed with CRC, followed from age 70 until death or age 100. The study adopted the perspective of health care payers. The model was assessed for face validity, internal and external validity, and cross-validity. The validation showed a satisfactory match with other models and empirical estimates for both cost and survival time, without any preceding calibration of the model. The model can be used to 1) address a range of CRC-related themes (as a general model), such as survival and evaluation of the costs of treatment and prevention measures; 2) make predictions from intermediate to final outcomes; 3) estimate changes in resource use and costs due to changing guidelines; and 4) adjust for future changes in treatment and trends over time. The model is adaptable to other populations. © The Author(s) 2014.
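    Underneath any such model sits the plain Markov cohort trace: a state-occupancy vector propagated by a transition matrix each cycle, with costs attached to state occupancy. This sketch shows only that memoryless building block with a hypothetical 3-state matrix and cost; the paper's semi-Markov model adds 70 states, tunnel states, and explicit tracking of age and time-in-state.

```python
import numpy as np

def cohort_trace(p0, T, n_cycles):
    """Markov cohort trace: propagate the state-occupancy vector with
    p_{t+1} = p_t @ T, where each row of T sums to 1."""
    T = np.asarray(T, float)
    trace = [np.asarray(p0, float)]
    for _ in range(n_cycles):
        trace.append(trace[-1] @ T)
    return np.vstack(trace)

# Hypothetical 3-state model: alive without CRC, CRC, dead (annual cycles)
T = [[0.97, 0.02, 0.01],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]
trace = cohort_trace([1.0, 0.0, 0.0], T, 30)
annual_costs = trace[:, 1] * 20_000  # illustrative cost per CRC year
```

    Tunnel states extend this by splitting "CRC" into "CRC year 1", "CRC year 2", …, so transition probabilities and costs can depend on time since diagnosis.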

  1. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, estimating the pose, motion and inertia properties of an uncooperative object is a challenging task because of the lack of a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is included in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities, and in several simulations they show improvements over similar works in the literature that address the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia ratio estimator to provide a complete tool for mass property recognition.
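    The generic EKF measurement update that both the EKF and IEKF variants build on is shown below; in the paper the state is augmented with the inertia ratios and the pseudo-measurement enters as one more row of h and H, but the dynamics, Jacobians, and pseudo-measurement form here are not reproduced, so treat this as a generic sketch with assumed names.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P, measurement z,
    measurement function h(x), Jacobian H, measurement noise R."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

    The IEKF simply re-linearizes h and H about x_new and repeats this update until the state increment converges, which helps with strongly nonlinear measurements such as stereo projections.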

  2. Software risk management through independent verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Zhou, Tong C.; Wood, Ralph

    1995-01-01

    Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need the ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Question/Metric model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.

  3. Validation and Expected Error Estimation of Suomi-NPP VIIRS Aerosol Optical Thickness and Angstrom Exponent with AERONET

    NASA Technical Reports Server (NTRS)

    Huang, Jingfeng; Kondragunta, Shobha; Laszlo, Istvan; Liu, Hongqing; Remer, Lorraine A.; Zhang, Hai; Superczynski, Stephen; Ciren, Pubu; Holben, Brent N.; Petrenko, Maksym

    2016-01-01

    The new-generation polar-orbiting operational environmental sensor, the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite, provides critical daily global aerosol observations. As older satellite sensors age out, the VIIRS aerosol product will become the primary observational source for global assessments of aerosol emission and transport, aerosol meteorological and climatic effects, air quality monitoring, and public health. To prove their validity and to assess their maturity level, the VIIRS aerosol products were compared to spatiotemporally matched Aerosol Robotic Network (AERONET) measurements. Over land, the VIIRS aerosol optical thickness (AOT) environmental data record (EDR) exhibits an overall global bias against AERONET of 0.0008, with a root-mean-square error (RMSE) of the biases of 0.12. Over ocean, the mean bias of the VIIRS AOT EDR is 0.02 with an RMSE of 0.06. The mean bias of the VIIRS Ocean Angstrom Exponent (AE) EDR is 0.12 with an RMSE of 0.57. The matchups between each product and its AERONET counterpart allow estimates of expected error in each case. Increased uncertainty in the VIIRS AOT and AE products is linked to specific regions, seasons, surface characteristics, and aerosol types, suggesting opportunities for future modifications as understanding of algorithm assumptions improves. Based on the assessment, the VIIRS AOT EDR over land reached Validated maturity beginning 23 January 2013; the AOT EDR and AE EDR over ocean reached Validated maturity beginning 2 May 2012, excluding the processing error period of 15 October to 27 November 2012. These findings demonstrate the integrity and usefulness of the VIIRS aerosol products, which will transition from S-NPP to future polar-orbiting environmental satellites in the decades to come and become the standard global aerosol data set as the previous-generation missions come to an end.
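    The matchup statistics quoted above reduce to a few lines of arithmetic; the expected-error envelope shown here uses a hypothetical ±(0.05 + 15%) form for illustration, not the envelope actually derived in the paper.

```python
import numpy as np

def matchup_stats(sat_aot, aeronet_aot):
    """Mean bias and RMSE of satellite-vs-AERONET AOT matchups."""
    d = np.asarray(sat_aot, float) - np.asarray(aeronet_aot, float)
    return d.mean(), np.sqrt((d ** 2).mean())

def within_ee(sat_aot, aeronet_aot, a=0.05, f=0.15):
    """Fraction of matchups inside a hypothetical expected-error
    envelope of +/-(a + f * AERONET AOT)."""
    sat = np.asarray(sat_aot, float)
    ref = np.asarray(aeronet_aot, float)
    return np.mean(np.abs(sat - ref) <= a + f * ref)
```

    Stratifying these statistics by region, season, and surface type is what exposes the algorithm-assumption problems mentioned above.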

  4. Robust estimators for speech enhancement in real environments

    NASA Astrophysics Data System (ADS)

    Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly

    2015-09-01

    Common statistical estimators for speech enhancement rely on several assumptions about the stationarity of speech signals and noise. These assumptions may not always be valid in real life owing to the nonstationary characteristics of speech and noise processes. We propose new estimators, based on existing ones, that incorporate the computation of rank-order statistics. The proposed estimators are better adapted to the nonstationary characteristics of speech signals and noise processes. Through computer simulations we show that the proposed estimators yield better performance in terms of objective metrics than known estimators when speech signals are contaminated with airport, babble, restaurant, and train-station noise.

  5. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  6. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

    The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
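    Of the three metrics, the Dice similarity coefficient is the simplest to state, and the threshold-maximization step reduces to a one-line search; a minimal sketch (grid-search over candidate thresholds stands in for the paper's parametric beta-mixture maximization):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def best_threshold(prob_map, gold, thresholds):
    """Decision threshold on a probabilistic segmentation that
    maximizes Dice against the (latent) gold standard."""
    return max(thresholds, key=lambda t: dice(prob_map >= t, gold))
```

    The same pattern applies to the other two metrics: sweep the decision threshold and keep the value that maximizes the ROC- or mutual-information-based criterion.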

  7. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    Dictionary-based orientation field estimation has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Recognizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized-dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch at a location output by a local estimation approach is replaced by a real orientation patch from the local dictionary at the same location. A precondition for applying localized dictionaries is that the pose of the latent fingerprint must be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show the proposed method markedly outperforms previous ones.

  8. Large-scale estimates of gross primary production on the Qinghai-Tibet plateau based on remote sensing data

    NASA Astrophysics Data System (ADS)

    Ma, M., II; Yuan, W.; Dong, J.; Zhang, F.; Cai, W.; Li, H.

    2017-12-01

    Vegetation gross primary production (GPP) is an important variable in the carbon cycle of the Qinghai-Tibetan Plateau (QTP). Based on measurements from twelve eddy covariance (EC) sites, we validated a light use efficiency model (EC-LUE) to evaluate the spatio-temporal patterns of GPP and the effect of environmental variables on the QTP. The EC-LUE model explained 85.4% of the daily observed GPP variation across the twelve EC sites and characterized the seasonal changes of GPP very well. Annual GPP over the entire QTP ranged from 575 to 703 Tg C and showed a significantly increasing trend from 1982 to 2013, although there were large spatial heterogeneities in the long-term trends. Across the QTP, increases in air temperature (TA) had a greater influence on productivity than changes in solar radiation and precipitation (PREC). Moreover, our results highlight the large uncertainties in previous GPP estimates due to insufficient parameterization and validation: compared with the GPP estimates of the EC-LUE model, most Coupled Model Intercomparison Project (CMIP5) GPP products overestimate the magnitude and increasing trends of regional GPP, which potentially affects the modeled feedback of ecosystems to regional climate change.
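    Light-use-efficiency models of this family compute GPP as absorbed PAR times a maximum efficiency, down-regulated by the most limiting environmental scalar. The sketch below shows that general structure with a common ramp-shaped temperature scalar; the parameter values (eps_max, t_min, t_max, t_opt) are illustrative assumptions, not the EC-LUE values fitted in this study.

```python
import numpy as np

def temperature_scalar(t_air, t_min=0.0, t_max=40.0, t_opt=20.7):
    """Temperature limitation in [0, 1], peaking at t_opt and
    vanishing at t_min and t_max (illustrative parameters)."""
    num = (t_air - t_min) * (t_air - t_max)
    den = num - (t_air - t_opt) ** 2
    s = np.where(den != 0.0, num / den, 0.0)
    return np.clip(s, 0.0, 1.0)

def gpp_lue(par, fpar, f_temp, f_water, eps_max=2.0):
    """LUE-model GPP: APAR = PAR * fPAR, scaled by the maximum light
    use efficiency and the most limiting environmental scalar."""
    return eps_max * par * fpar * np.minimum(f_temp, f_water)
```

    The min() coupling is why warming dominates on the QTP: where temperature is the binding constraint, a warmer scalar raises GPP regardless of the radiation and moisture terms.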

  9. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the percentage of brands or the percentage of eating occasions within a food group that contained the additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of the full conceptual models. While the distributions of intake estimates from the models fell below conservative intakes, which assume that the additive is present at the maximum permitted level (MPL) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
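    The three model components combine multiplicatively in a Monte Carlo simulation: food intake × an indicator that the additive is present × concentration. This sketch uses one of the eight input combinations (lognormal intake, per-cent-brands presence probability, lognormal concentration); all distribution parameters are illustrative, not values from the reference database.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_additive_intake(n, log_food, p_present, log_conc):
    """Probabilistic additive intake: lognormal food intake (g/day)
    x Bernoulli presence of the additive in the consumed item
    x lognormal concentration (mg/g)."""
    food = rng.lognormal(*log_food, size=n)       # (mu, sigma) of log intake
    present = rng.random(n) < p_present           # additive present or not
    conc = rng.lognormal(*log_conc, size=n)       # (mu, sigma) of log conc.
    return food * present * conc

intakes = simulate_additive_intake(100_000, (4.0, 0.5), 0.3, (-3.0, 0.4))
p95 = np.percentile(intakes, 95)  # high-percentile consumer estimate
```

    Validity then amounts to checking that the simulated distribution sits below the MPL-based conservative point estimate and above the brand-level 'true' intakes.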

  10. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    NASA Astrophysics Data System (ADS)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch curve-based method is developed for estimating time-based Z and its trend from catch per unit effort (CPUE) data of multiple cohorts. Unlike the traditional catch-curve method, the method developed here does not assume a constant Z over the entire period; instead, Z is assumed constant within each window of n consecutive years, and the Z values for different windows are estimated from the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true rates (the relative differences between the change rates of the estimated and true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimated Z values and trend. The most appropriate value of n can differ depending on these factors, so the appropriate value of n for a given fishery should be determined through a simulation analysis as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either is subject to error, although the estimated rates of change of Z remain close to the true rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
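    The classic catch-curve building block underneath this method fits a line to ln(CPUE) against age: under constant mortality, CPUE at age a is proportional to exp(-Z·a), so Z is the negative slope. A minimal sketch (the paper applies this within each n-year window and across cohorts, which is not reproduced here):

```python
import numpy as np

def estimate_z(ages, cpue):
    """Catch-curve estimate of instantaneous total mortality Z:
    the negative slope of ln(CPUE) regressed on age for the
    fully recruited age classes."""
    ages = np.asarray(ages, float)
    log_cpue = np.log(np.asarray(cpue, float))
    slope, _ = np.polyfit(ages, log_cpue, 1)
    return -slope
```

    The windowed method repeats this fit for each block of n consecutive years, yielding a time series of Z rather than a single lifetime value.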

  11. A new lithium-ion battery internal temperature on-line estimate method based on electrochemical impedance spectroscopy measurement

    NASA Astrophysics Data System (ADS)

    Zhu, J. G.; Sun, Z. C.; Wei, X. Z.; Dai, H. F.

    2015-01-01

    The power battery thermal management problem in EVs (electric vehicles) and HEVs (hybrid electric vehicles) has been widely discussed, and EIS (electrochemical impedance spectroscopy) is an effective experimental method for testing and estimating battery status. First, an electrochemical-based impedance matrix analysis for lithium-ion batteries is developed to describe the impedance response in electrochemical impedance spectroscopy. A method based on electrochemical impedance spectroscopy measurement is then proposed to estimate the internal temperature of a power lithium-ion battery by analyzing the phase shift and magnitude of the impedance at different ambient temperatures. In the experimental study, SoC (state of charge) and temperature affect the impedance characteristics of the battery in different frequency ranges, and the effect of SoH (state of health) on the impedance spectrum is discussed preliminarily. The excitation frequency selected for estimating the internal temperature therefore lies in the frequency range that is significantly influenced by temperature but not by SoC or SoH. The intrinsic relationship between the phase shift and temperature is established at the chosen excitation frequency, and the temperature dependence of the impedance magnitude is also studied. In practical applications, the internal temperature can be estimated from the measured phase shift and impedance magnitude. Verification experiments were conducted to validate the estimation method. Finally, an estimation strategy and an on-line estimation system implementation scheme utilizing the battery management system are presented to describe the engineering value.
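    Once the phase-temperature relation at the chosen excitation frequency is calibrated, on-line estimation reduces to inverting that monotone relation, e.g. by interpolation in a lookup table. The calibration numbers below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical calibration at the chosen excitation frequency:
# impedance phase shift (degrees) measured at known cell temperatures.
CAL_PHASE_DEG = np.array([-30.0, -22.0, -15.0, -9.0, -4.0])
CAL_TEMP_C = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

def internal_temperature(phase_deg):
    """Invert the monotone phase-temperature calibration by linear
    interpolation (np.interp requires increasing xp)."""
    return np.interp(phase_deg, CAL_PHASE_DEG, CAL_TEMP_C)
```

    In a battery management system, the same lookup could be cross-checked against the impedance magnitude to flag inconsistent readings.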

  12. Search-free license plate localization based on saliency and local variance estimation

    NASA Astrophysics Data System (ADS)

    Safaei, Amin; Tang, H. L.; Sanei, S.

    2015-02-01

    In recent years, the performance and accuracy of automatic license plate number recognition (ALPR) systems have greatly improved; however, the increasing number of applications for such systems has made ALPR research more challenging than ever. The inherent computational complexity of search-dependent algorithms remains a major problem for current ALPR systems. This paper proposes a novel search-free localization method based on the estimation of saliency and local variance. Gabor functions are then used to validate the choice of candidate license plate. The algorithm was applied to three image datasets with different levels of complexity, and the results were compared with a number of benchmark methods, particularly in terms of speed. The proposed method outperforms the state-of-the-art methods and can be used for real-time applications.

  13. Estimation of hand hygiene opportunities on an adult medical ward using 24-hour camera surveillance: validation of the HOW2 Benchmark Study.

    PubMed

    Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C

    2014-06-01

    We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Comparisons of the mean and median HHOs per 24-hour period did not differ significantly. HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  14. Construct Validity Evidence for Single-Response Items to Estimate Physical Activity Levels in Large Sample Studies

    ERIC Educational Resources Information Center

    Jackson, Allen W.; Morrow, James R., Jr.; Bowles, Heather R.; FitzGerald, Shannon J.; Blair, Steven N.

    2007-01-01

    Valid measurement of physical activity is important for studying the risks for morbidity and mortality. The purpose of this study was to examine evidence of construct validity of two similar single-response items assessing physical activity via self-report. Both items are based on the stages of change model. The sample was 687 participants (men =…

  15. Systematic feature selection improves accuracy of methylation-based forensic age estimation in Han Chinese males.

    PubMed

    Feng, Lei; Peng, Fuduan; Li, Shanfei; Jiang, Li; Sun, Hui; Ji, Anquan; Zeng, Changqing; Li, Caixia; Liu, Fan

    2018-03-23

    Estimating individual age from biomarkers may provide key information facilitating forensic investigations. Recent progress has shown DNA methylation at age-associated CpG sites to be the most informative biomarker for estimating the age of an unknown donor. Optimal feature selection plays a critical role in determining the performance of the final prediction model. In this study we investigate methylation levels at 153 age-associated CpG sites from 21 previously reported genomic regions using the EpiTYPER system for their predictive power on individual age in 390 Han Chinese males ranging from 15 to 75 years of age. We conducted a systematic feature selection using a stepwise backward multiple linear regression analysis as well as an exhaustive search algorithm. Both approaches identified the same subset of 9 CpG sites, which in linear combination provided the optimal model fit with a mean absolute deviation (MAD) of 2.89 years and explained variance (R²) of 0.92. The final model was validated in two independent Han Chinese male samples (validation set 1, N = 65, MAD = 2.49, R² = 0.95, and validation set 2, N = 62, MAD = 3.36, R² = 0.89). Other competing models, such as support vector machines and artificial neural networks, did not outperform the linear model to any noticeable degree. Validation set 1 was additionally analyzed using pyrosequencing technology for cross-platform validation and was termed validation set 3. Directly applying our model, in which the methylation levels were detected by the EpiTYPER system, to the pyrosequencing data gave less accurate results in terms of MAD (validation set 3, N = 65 Han Chinese males, MAD = 4.20, R² = 0.93), suggesting the presence of a batch effect between data generation platforms. This batch effect could be partially overcome by a z-score transformation (MAD = 2.76, R² = 0.93). Overall, our
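    The z-score workaround maps each platform's methylation levels onto a common scale by standardizing against that platform's own mean and standard deviation. A minimal sketch, assuming per-CpG means and SDs are available from both platforms (the exact per-site statistics used in the paper are not reproduced here):

```python
import numpy as np

def zscore_transfer(x, src_mean, src_std, dst_mean, dst_std):
    """Map methylation levels measured on one platform onto another
    platform's scale by matching z-scores:
    z = (x - src_mean) / src_std, then x' = z * dst_std + dst_mean."""
    return (np.asarray(x, float) - src_mean) / src_std * dst_std + dst_mean
```

    Standardizing removes additive and multiplicative offsets between platforms but not nonlinear distortions, which is consistent with the correction being only partial.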

  16. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimates are an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region with a sparse rain gauge network is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method for removing the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 is used for validation. The results show that the yearly correlation coefficient was enhanced by 12% and the yearly mean bias was reduced by 93% during the validation year. The root mean square error was reduced by 73% during the validation year. The correlation coefficient, mean bias, and root mean square error show that the proposed method effectively removes the bias from PERSIANN-CCS, and that the method can be applied to other regions where the rain gauge network is sparse.
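The core of the approach above, empirical quantile mapping, can be sketched as follows. The data are synthetic (a satellite product that overestimates a gauge by a constant factor), and the function is a minimal illustration of mapping each satellite value to the gauge value at the same empirical quantile from a calibration period, not the authors' implementation:

```python
import numpy as np

def quantile_map(sat_cal, gauge_cal, sat_new):
    """Empirical quantile mapping: replace each new satellite value with the
    gauge value at the same empirical quantile from the calibration period."""
    sat_sorted = np.sort(sat_cal)
    gauge_sorted = np.sort(gauge_cal)
    # Empirical quantile of each new value within the calibration satellite distribution
    q = np.searchsorted(sat_sorted, sat_new, side="right") / len(sat_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(gauge_sorted, q)

# Hypothetical calibration data: satellite overestimates the gauge by a factor of 2
rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 3.0, size=2000)     # daily rainfall (mm), gamma-like
satellite = 2.0 * gauge                    # biased satellite estimate
corrected = quantile_map(satellite, gauge, satellite)
print(abs(corrected.mean() - gauge.mean()))  # mean bias largely removed
```

In the study this mapping is built per month and per climate class before the corrections are spatially interpolated with inverse distance weighting.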

  17. Dietary intakes assessed by 24-h recalls in peri-urban African adolescents: validity of energy intake compared with estimated energy expenditure.

    PubMed

    Rankin, D; Ellis, S M; Macintyre, U E; Hanekom, S M; Wright, H H

    2011-08-01

The objective of this study is to determine the relative validity of reported energy intake (EI) derived from multiple 24-h recalls against estimated energy expenditure (EE(est)). Basal metabolic rate (BMR) equations and physical activity factors were incorporated to calculate EE(est). This analysis was nested in the multidisciplinary PhysicaL Activity in the Young study, which had a prospective design. Peri-urban black South African adolescents were investigated in a subsample of 131 learners (87 girls and 44 boys) from the parent study sample of 369 (211 girls and 158 boys) who had all measurements taken. Pearson correlation coefficients and Bland-Altman plots were calculated to identify the most accurate published equations for estimating BMR (P<0.05 statistically significant). EE(est) was estimated using BMR equations and physical activity factors derived from Previous Day Physical Activity Recall questionnaires. After calculation of EE(est), the relative validity of reported energy intake (EI(rep)) derived from multiple 24-h recalls was tested for three data subsets using Pearson correlation coefficients. Goldberg's formula identified cut points (CPs) for under- and over-reporting of EI. Pearson correlation coefficients between calculated BMRs ranged from 0.97 to 0.99. Bland-Altman analyses showed acceptable agreement (two equations for each gender). One equation for each gender was used to calculate EE(est). Pearson correlation coefficients between EI(rep) and EE(est) for the three data sets were weak, indicating poor agreement. CPs for physical activity groups showed under-reporting in 87% of boys and 95% of girls. The 24-h recalls, taken at five measurement occasions over 2 years, showed poor validity between EI(rep) and EE(est).
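The Goldberg cut-point idea referenced above compares the EI:BMR ratio against confidence limits around the physical activity level (PAL). A minimal sketch follows, using the commonly cited coefficient-of-variation values from Black's revision of the Goldberg method; the subject's PAL, recall count, and EI:BMR value are hypothetical:

```python
import math

def goldberg_cutoffs(pal, n_subjects, d_days, sd=2.0,
                     cv_wEI=23.0, cv_wB=8.5, cv_tP=15.0):
    """Goldberg cut-offs for the EI:BMR ratio (commonly cited CV values).
    Ratios below the lower cut-off suggest under-reporting; above the
    upper cut-off, over-reporting."""
    s = math.sqrt(cv_wEI**2 / d_days + cv_wB**2 + cv_tP**2)
    factor = sd * s / 100.0 / math.sqrt(n_subjects)
    return pal * math.exp(-factor), pal * math.exp(factor)

# Evaluate a single subject (n = 1) with five 24-h recalls and PAL = 1.55
lo, hi = goldberg_cutoffs(pal=1.55, n_subjects=1, d_days=5)
ei_bmr = 0.95  # hypothetical reported EI divided by estimated BMR
print(ei_bmr < lo)  # True: flagged as an under-reporter
```

The wide individual-level limits explain why multiple recall days are needed before under-reporting can be asserted with confidence.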

  18. Sparse estimation of model-based diffuse thermal dust emission

    NASA Astrophysics Data System (ADS)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  19. Convergent Validity Evidence regarding the Validity of the Chilean Standards-Based Teacher Evaluation System

    ERIC Educational Resources Information Center

    Santelices, Maria Veronica; Taut, Sandy

    2011-01-01

    This paper describes convergent validity evidence regarding the mandatory, standards-based Chilean national teacher evaluation system (NTES). The study examined whether NTES identifies--and thereby rewards or punishes--the "right" teachers as high- or low-performing. We collected in-depth teaching performance data on a sample of 58…

  20. A methodology to estimate representativeness of LAI station observation for validation: a case study with Chinese Ecosystem Research Network (CERN) in situ data

    NASA Astrophysics Data System (ADS)

    Xu, Baodong; Li, Jing; Liu, Qinhuo; Zeng, Yelu; Yin, Gaofei

    2014-11-01

Leaf Area Index (LAI) is a key vegetation biophysical variable. To use remote sensing LAI products effectively in various disciplines, it is critical to understand their accuracy. The common method for validating LAI products is first to establish an empirical relationship between field data and high-resolution imagery to derive LAI maps, and then to aggregate the high-resolution LAI maps to match moderate-resolution LAI products. This method is suited only to small regions, and its measurement frequency is limited. Therefore, continuous LAI observations from ground station networks are important for validating multi-temporal LAI products. However, because of the scale mismatch between the point observation at a ground station and the pixel observation of a product, direct comparison introduces scale error. The representativeness of a ground station measurement within the product pixel must therefore be evaluated for a reasonable validation. In this paper, a case study with Chinese Ecosystem Research Network (CERN) in situ data was used to introduce a methodology for estimating the representativeness of LAI station observations for validating LAI products. We first analyzed the indicators used to evaluate observation representativeness, and then graded the station measurement data. Finally, the LAI measurement data that can represent the pixel scale were used to validate the MODIS, GLASS and GEOV1 LAI products. The results show that the best agreement is reached between GLASS and GEOV1, while the lowest uncertainty is achieved by GEOV1, followed by GLASS and MODIS. We conclude that ground station measurement data can objectively validate multi-temporal LAI products based on the evaluation indicators of station observation representativeness, which can also improve the reliability of remote sensing product validation.

  1. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    PubMed Central

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

Background Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six-month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair, then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and to determine its recall and precision. Results The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge base, to compare crowdsourcing with other approaches, and to evaluate whether incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079
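The link frequency/link ratio computation above can be sketched as follows. The records, the specific definition of link ratio (here, the fraction of a medication's occurrences that co-occur with the problem), and the threshold are illustrative assumptions, not the published definitions:

```python
from collections import Counter
from itertools import product

# Hypothetical patient records: problems and medications entered during care
records = [
    {"problems": {"hypertension"}, "meds": {"lisinopril"}},
    {"problems": {"hypertension", "diabetes"}, "meds": {"lisinopril", "metformin"}},
    {"problems": {"diabetes"}, "meds": {"metformin"}},
    {"problems": {"hypertension"}, "meds": {"metformin"}},
]

pair_counts = Counter()   # link frequency: co-occurrences of a problem-med pair
med_counts = Counter()    # how often each medication appears overall
for rec in records:
    for med in rec["meds"]:
        med_counts[med] += 1
    for prob, med in product(rec["problems"], rec["meds"]):
        pair_counts[(prob, med)] += 1

# One plausible link ratio: pairs above a clinician-reviewed threshold
# enter the knowledge base
threshold = 0.6
kb = {pair for pair, n in pair_counts.items()
      if n / med_counts[pair[1]] >= threshold}
print(("diabetes", "metformin") in kb)  # True: 2 of 3 metformin records co-occur
```

Raising the threshold trades recall for precision, which is exactly the tuning step performed via clinician review in the study.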

  2. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  3. Simple, sensitive, selective and validated spectrophotometric methods for the estimation of a biomarker trigonelline from polyherbal gels

    NASA Astrophysics Data System (ADS)

    Chopra, Shruti; Motwani, Sanjay K.; Ahmad, Farhan J.; Khar, Roop K.

    2007-11-01

Simple, accurate, reproducible, selective, sensitive and cost-effective UV-spectrophotometric methods were developed and validated for the estimation of trigonelline in bulk and pharmaceutical formulations. Trigonelline was estimated at 265 nm in deionised water and at 264 nm in phosphate buffer (pH 4.5). Beer's law was obeyed in the concentration ranges of 1-20 μg mL⁻¹ (r² = 0.9999) in deionised water and 1-24 μg mL⁻¹ (r² = 0.9999) in the phosphate buffer medium. The apparent molar absorptivity and Sandell's sensitivity coefficient were found to be 4.04 × 10³ L mol⁻¹ cm⁻¹ and 0.0422 μg cm⁻²/0.001 A in deionised water, and 3.05 × 10³ L mol⁻¹ cm⁻¹ and 0.0567 μg cm⁻²/0.001 A in phosphate buffer media, respectively. These methods were tested and validated for various parameters according to ICH guidelines. The detection and quantitation limits were found to be 0.12 and 0.37 μg mL⁻¹ in deionised water and 0.13 and 0.40 μg mL⁻¹ in phosphate buffer medium, respectively. The proposed methods were successfully applied for the determination of trigonelline in pharmaceutical formulations (vaginal tablets and bioadhesive vaginal gels). The results demonstrated that the procedure is accurate, precise, specific and reproducible (percent relative standard deviation <2%), while being simple and less time-consuming, and hence can be suitably applied for the estimation of trigonelline in different dosage forms and dissolution studies.
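The calibration and limit calculations behind such a method can be sketched as follows, using hypothetical absorbance data and the ICH Q2(R1)-style estimates LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the regression and S the slope:

```python
import numpy as np

# Hypothetical calibration: absorbance vs concentration (μg/mL), Beer's law A = εbc
conc = np.array([1, 4, 8, 12, 16, 20], dtype=float)
absorbance = 0.042 * conc + np.array([0.001, -0.002, 0.002, -0.001, 0.001, -0.001])

slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
residual_sd = np.sqrt(np.sum((absorbance - pred) ** 2) / (len(conc) - 2))

# ICH-style limits from the residual SD of the regression line
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2
print(f"slope={slope:.4f}, r2={r2:.4f}, LOD={lod:.2f}, LOQ={loq:.2f} μg/mL")
```

The published limits may instead derive from the SD of blank responses; both routes are accepted under ICH Q2(R1).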

  4. The development and validation of new equations for estimating body fat percentage among Chinese men and women.

    PubMed

    Liu, Xin; Sun, Qi; Sun, Liang; Zong, Geng; Lu, Ling; Liu, Gang; Rosner, Bernard; Ye, Xingwang; Li, Huaixing; Lin, Xu

    2015-05-14

Equations based on simple anthropometric measurements to predict body fat percentage (BF%) are lacking in the Chinese population, which has an increasing prevalence of obesity and related abnormalities. We aimed to develop and validate BF% equations in two independent population-based samples of Chinese men and women. The equations were developed among 960 Chinese Hans living in Shanghai (age 46.2 (SD 5.3) years; 36.7% male) using stepwise linear regression and were subsequently validated in 1150 Shanghai residents (58.7 (SD 6.0) years; 41.7% male; 99% Chinese Hans, 1% Chinese minorities). The associations of equation-derived BF% with changes in 6-year cardiometabolic outcomes and incident type 2 diabetes (T2D) were evaluated in a sub-cohort of 780 Chinese, compared with BF% measured by dual-energy X-ray absorptiometry (DXA; BF%-DXA). Sex-specific equations were established with age, BMI and waist circumference as independent variables. The BF% calculated using the new sex-specific equations (BF%-CSS) was in reasonable agreement with BF%-DXA (mean difference: 0.08 (2 SD 6.64) %, P = 0.606 in men; 0.45 (2 SD 6.88) %, P < 0.001 in women). In multivariate-adjusted models, BF%-CSS and BF%-DXA showed comparable associations with 6-year changes in TAG, HDL-cholesterol, diastolic blood pressure, C-reactive protein and uric acid (P for comparisons ≥ 0.05). Meanwhile, BF%-CSS and BF%-DXA had comparable areas under the receiver operating characteristic curves for associations with incident T2D (men P = 0.327; women P = 0.159). The BF% equations might be used as surrogates for DXA to estimate BF% among adult Chinese. More studies are needed to evaluate the application of our equations in different populations.

  5. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform non-blind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size; a typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  6. Validity of a Semi-Quantitative Food Frequency Questionnaire for Collegiate Athletes

    PubMed Central

    Sasaki, Kazuto; Suzuki, Yoshio; Oguma, Nobuhide; Ishihara, Junko; Nakai, Ayumi; Yasuda, Jun; Yokoyama, Yuri; Yoshizaki, Takahiro; Tada, Yuki; Hida, Azumi; Kawano, Yukari

    2016-01-01

    Background Food frequency questionnaires (FFQs) have been developed and validated for various populations. To our knowledge, however, no FFQ has been validated for young athletes. Here, we investigated whether an FFQ that was developed and validated to estimate dietary intake in middle-aged persons was also valid for estimating that in young athletes. Methods We applied an FFQ that had been developed for the Japan Public Health Center-based Prospective Cohort Study with modification to the duration of recollection. A total of 156 participants (92 males) completed the FFQ and a 3-day non-consecutive 24-hour dietary recall (24hDR). Validity of the mean estimates was evaluated by calculating the percentage differences between the 24hDR and FFQ. Ranking estimation was validated using Spearman’s correlation coefficient (CC), and the degree of miscategorization was determined by joint classification. Results The FFQ underestimated energy intake by approximately 10% for both males and females. For 35 nutrients, the median (range) deattenuated CC was 0.30 (0.10 to 0.57) for males and 0.32 (−0.08 to 0.62) for females. For 19 food groups, the median (range) deattenuated CC was 0.32 (0.17 to 0.72) for males and 0.34 (−0.11 to 0.58) for females. For both nutrient and food group intakes, cross-classification analysis indicated extreme miscategorization rates of 3% to 5%. Conclusions An FFQ developed and validated for middle-aged persons had comparable validity among young athletes. This FFQ might be useful for assessing habitual dietary intake in collegiate athletes, especially for calcium, vitamin C, vegetables, fruits, and milk and dairy products. PMID:26902164

  7. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    PubMed

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model yields a CV result (R² = 0.81) of higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
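The core Gaussian-process interpolation idea can be sketched in a few lines of NumPy. This is a 1-D toy with a squared-exponential kernel and a synthetic smooth "residual surface", not the paper's full Bayesian hierarchical model (which also carries AOD fixed effects and non-spatial random effects):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance for the spatial random effect."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

# Hypothetical 1-D toy: a residual that varies smoothly in space, observed with noise
rng = np.random.default_rng(2)
x_obs = np.linspace(0, 10, 25)
y_obs = np.sin(x_obs) + rng.normal(0, 0.1, x_obs.size)

noise = 0.1 ** 2
K = rbf_kernel(x_obs, x_obs) + noise * np.eye(x_obs.size)
x_new = np.array([2.5, 7.5])       # unmonitored locations
K_s = rbf_kernel(x_new, x_obs)

# GP posterior mean at the unmonitored locations (zero prior mean)
post_mean = K_s @ np.linalg.solve(K, y_obs)
print(post_mean)
```

In the full model, the kernel hyperparameters and the regression coefficients are inferred jointly within the Bayesian hierarchy rather than fixed as here.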

  8. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based

  9. Model-based estimation and control for off-axis parabolic mirror alignment

    NASA Astrophysics Data System (ADS)

    Fang, Joyce; Savransky, Dmitry

    2018-02-01

This paper proposes a model-based estimation and control method for off-axis parabolic mirror (OAP) alignment. Current automated optical alignment systems typically require additional wavefront sensors. We propose a self-aligning method using only focal plane images captured by the existing camera. Image processing methods and Karhunen-Loève (K-L) decomposition are used to extract measurements for the observer in the closed-loop control system. Our system has linear dynamics in the state transition and a nonlinear mapping from the state to the measurement. An iterative extended Kalman filter (IEKF) is shown to accurately predict the unknown states, and nonlinear observability is discussed. A linear-quadratic regulator (LQR) is applied to correct the misalignments. The method is validated experimentally on an optical bench with a commercial OAP. We conducted 100 tests to demonstrate consistency between runs.

  10. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  11. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    NASA Astrophysics Data System (ADS)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

A tsunami model using high-resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas; such data are one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from a seismic gap. This paper discusses validation and error estimation of a tsunami model created using high-resolution geometric data in Sadeng Port. Model validation uses the tsunami wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge. The tsunami model accommodates earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student's t-test show that the modeled and observed tsunami heights at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE are 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.
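The validation statistics used above (paired t-test, RMSE, NRMSE) can be sketched as follows; the wave heights are hypothetical, and NRMSE is normalized by the observed range, which is one common convention among several:

```python
import numpy as np

# Hypothetical modeled vs observed tsunami heights (m) at a tide gauge
observed = np.array([1.2, 1.8, 2.1, 1.5, 1.9, 2.4, 1.1, 1.6])
modeled  = np.array([1.0, 1.9, 2.4, 1.3, 2.2, 2.1, 1.4, 1.8])

rmse = np.sqrt(np.mean((modeled - observed) ** 2))
nrmse = rmse / (observed.max() - observed.min())  # normalized by observed range

# Paired (Student's) t-test against the two-sided 95% critical value, df = 7
d = modeled - observed
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
equal_at_95 = abs(t_stat) < 2.365  # t critical value for df = 7
print(f"RMSE={rmse:.3f} m, NRMSE={nrmse:.1%}, equal at 95%: {equal_at_95}")
```

Failing to reject the null here supports (but does not prove) agreement between model and observation, which is how the study's 95%-level conclusion should be read.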

  12. Validity in work-based assessment: expanding our horizons.

    PubMed

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these

  13. Novel SVM-based technique to improve rainfall estimation over the Mediterranean region (north of Algeria) using the multispectral MSG SEVIRI imagery

    NASA Astrophysics Data System (ADS)

    Sehad, Mounir; Lazri, Mourad; Ameur, Soltane

    2017-03-01

In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. The SVM_D and SVM_N models were then used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. The statistical scores, given by the correlation coefficient, bias, root mean square error and mean absolute error, showed good accuracy of the rainfall estimates from the present technique. Moreover, the rainfall estimates were compared with two high-accuracy MSG SEVIRI-based rainfall estimation methods: a random forests (RF) approach and an artificial neural network (ANN) technique. The present technique yields a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values, showing that it assigns 3-hourly and daily rainfall with better accuracy than the ANN and RF models.

  14. Small Launch Vehicle Trade Space Definition: Development of a Zero Level Mass Estimation Tool with Trajectory Validation

    NASA Technical Reports Server (NTRS)

    Waters, Eric D.

    2013-01-01

Recent high-level interest in the capability of small launch vehicles has placed significant demand on determining the trade space these vehicles occupy. This has led to the development of a zero-level analysis tool that can quickly determine the minimum expected vehicle gross liftoff weight (GLOW) in terms of vehicle stage specific impulse (Isp) and propellant mass fraction (pmf) for any given payload value. Drawing on extensive Earth-to-orbit trajectory experience, the total delta-v the vehicle must achieve, including relevant loss terms, can be estimated. This foresight into expected losses allows for more specific assumptions in the initial estimates of thrust-to-weight values for each stage. The tool was further validated against a trajectory model, in this case the Program to Optimize Simulated Trajectories (POST), to determine whether the initial sizing delta-v was adequate to meet payload expectations. Presented here is a description of how the tool is set up and the approach the analyst must take when using it. Expected outputs, which depend on the type of small launch vehicle being sized, are also shown. The method of validation is discussed, as well as where the sizing tool fits into the vehicle design process.
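A zero-level GLOW estimate from Isp and pmf follows directly from the ideal rocket equation, sizing stages from the top down. The delta-v split, Isp values, and pmf values below are hypothetical illustrations, not the tool's assumptions:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_mass(payload, dv, isp, pmf):
    """Stage (propellant + structure) mass required to give `payload` a velocity
    increment `dv`, from the ideal rocket equation.
    pmf = propellant mass / total stage mass."""
    R = math.exp(dv / (isp * G0))        # required mass ratio m0/mf
    denom = 1.0 - R * (1.0 - pmf)
    if denom <= 0:
        raise ValueError("stage cannot deliver this delta-v at the given Isp/pmf")
    return payload * (R - 1.0) / denom

def glow_two_stage(payload, dv_split, isps, pmfs):
    """GLOW for a two-stage vehicle: size the upper stage first, then treat
    (upper stage + payload) as the lower stage's payload."""
    mass = payload
    for dv, isp, pmf in zip(reversed(dv_split), reversed(isps), reversed(pmfs)):
        mass += stage_mass(mass, dv, isp, pmf)
    return mass

# Hypothetical small launcher: 9.2 km/s total ideal delta-v (losses included),
# split 60/40 between stages
glow = glow_two_stage(payload=150.0, dv_split=[5520.0, 3680.0],
                      isps=[290.0, 340.0], pmfs=[0.90, 0.88])
print(f"GLOW = {glow / 1000:.1f} t")
```

Sweeping Isp and pmf in such a routine maps out the minimum-GLOW trade space before trajectory validation in a tool like POST.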

  15. Cost Validation Using PRICE H

    NASA Technical Reports Server (NTRS)

    Jack, John; Kwan, Eric; Wood, Milana

    2011-01-01

PRICE H was introduced into the JPL cost estimation tool set circa 2003. It became more widely available at JPL when the IPAO funded a NASA-wide site license for all NASA centers. PRICE H was mainly used as one of the cost tools to validate proposal grassroots cost estimates. Program offices at JPL view PRICE H as an additional crosscheck to Team X (JPL Concurrent Engineering Design Center) estimates. PRICE H became widely accepted at JPL circa 2007, when the program offices moved away from grassroots cost estimation for Step 1 proposals. PRICE H is now one of the key cost tools used for cost validation, cost trades, and independent cost estimates.

  16. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients

    PubMed Central

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-01-01

    ♦ Objectives: To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Methods: Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ Results: The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Conclusions: Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. PMID:26293839

  17. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients.

    PubMed

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-12-01

    ♦ To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. Copyright © 2015 International Society for Peritoneal Dialysis.
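
    The comparison against DEXA in the abstract above rests on three agreement statistics: bias (median difference), precision (spread of the differences), and accuracy (share of estimates close to the measured value). A generic sketch of those statistics, with hypothetical data rather than the study's:

```python
import statistics

def agreement_stats(estimated, measured, tol=0.20):
    """Bias (median difference), precision (interquartile range of the
    differences) and accuracy (share of estimates within `tol` of the
    measured value), as used to compare LBM estimators against DEXA."""
    diffs = [e - m for e, m in zip(estimated, measured)]
    bias = statistics.median(diffs)
    q = statistics.quantiles(diffs, n=4)   # quartiles of the differences
    precision = q[2] - q[0]                # interquartile range
    accuracy = sum(abs(d) <= tol * m for d, m in zip(diffs, measured)) / len(diffs)
    return bias, precision, accuracy
```

    A smaller bias and interquartile range and a higher accuracy fraction correspond to the "lower bias, better accuracy and precision" findings reported for LBM-M-H and LBM-H.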

  18. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know the exact state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, which accumulates additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. With this approach, we not only reduce the computational load but also estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
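
    The unscented transformation at the core of this approach can be sketched for the scalar case: a small set of deterministically chosen sigma points is pushed through a nonlinear function and reweighted to recover the output mean and variance. This is a generic illustration of the transform, not the authors' implementation:

```python
import math

def sigma_points(mean, var, kappa=2.0):
    """Sigma points and weights for a scalar Gaussian (n = 1)."""
    n = 1
    s = math.sqrt((n + kappa) * var)
    pts = [mean, mean + s, mean - s]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    return pts, [w0, wi, wi]

def unscented_propagate(f, mean, var, kappa=2.0):
    """Approximate mean and variance of f(X) for X ~ N(mean, var)
    using 2n+1 deterministically chosen sigma points, instead of
    Monte Carlo sampling of the input trajectory."""
    pts, w = sigma_points(mean, var, kappa)
    ys = [f(x) for x in pts]
    m = sum(wi * y for wi, y in zip(w, ys))
    v = sum(wi * (y - m) ** 2 for wi, y in zip(w, ys))
    return m, v
```

    Because the sigma points are fixed by the input statistics, repeated runs give identical bounds, which is the "deterministic" property the abstract highlights.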

  19. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing-data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by
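
    The Val-MI ordering (split first, then impute the training and test parts separately) can be sketched as follows. Simple mean imputation stands in here for proper multiple imputation, and the function names and data are illustrative only:

```python
import random
import statistics

def mean_impute(rows, means=None):
    """Toy single imputation: fill None with column means. Stands in
    for proper multiple imputation (e.g. chained equations)."""
    if means is None:
        cols = list(zip(*rows))
        means = [statistics.fmean(v for v in c if v is not None) for c in cols]
    return [[m if v is None else v for v, m in zip(r, means)] for r in rows], means

def val_mi_split(rows, test_frac=0.3, seed=0):
    """Val-MI ordering: split into train/test FIRST, then impute each
    part separately (training statistics reused on the test part) so no
    information leaks from test to train through the imputation model."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    cut = int(len(rows) * (1 - test_frac))
    train = [rows[i] for i in idx[:cut]]
    test = [rows[i] for i in idx[cut:]]
    train_f, means = mean_impute(train)
    test_f, _ = mean_impute(test, means)
    return train_f, test_f
```

    MI-Val would instead call the imputation once on the full data set and split afterwards, which is exactly where the optimistic bias reported in the abstract comes from.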

  20. Validation of prediction equations for estimating resting energy expenditure in obese Chinese children.

    PubMed

    Chan, Dorothy F Y; Li, Albert M; Chan, Michael H M; So, Hung Kwan; Chan, Iris H S; Yin, Jane A T; Lam, Christopher W K; Fok, Tai Fai; Nelson, Edmund A S

    2009-01-01

    (1) To examine the validity of existing prediction equations (PREE) for estimating resting energy expenditure (REE) in obese Chinese children, (2) to correlate the measured REE (MREE) with anthropometric and biochemical parameters and (3) to derive a new PREE for local use. Cross-sectional study. 100 obese children (71 boys) were studied. All subjects underwent physical examination and anthropometric measurement. Upper and central body fat distribution was signified by the centrality and conicity index, respectively, and REE was measured by indirect calorimetry. Fat-free mass (FFM) was measured by DEXA scan. Thirteen existing prediction equations for estimating REE were compared with MREE among these obese children. Fasting blood for glucose, lipid profile and insulin was obtained. The overall, male and female median MREEs were 7.1 MJ/d (IQR 6.2-8.4), 7.3 MJ/d (IQR 6.3-9.7) and 6.9 MJ/d (IQR 5.6-8.1), respectively. No sex difference was noted in MREE (p=0.203). Most of the equations, except the Schofield equation, underestimated the REE of our children. By multiple linear regression, MREE was positively correlated with FFM (p<0.0001), conicity index (p<0.001) and centrality index (p=0.001). A new equation for estimating REE for local use was derived as: REE=(17.4*logFFM)+(11.4*conicity index)-(2.4*centrality index)-31.3. The mean difference of new PREE-MREE was -0.011 MJ/d (SD 1.51) with an intraclass correlation coefficient of 0.91. None of the existing prediction equations were accurate in their estimation of REE when applied to obese Chinese children. A new prediction equation has been derived for local use.
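
    The derived equation can be applied directly as printed. One caveat: the abstract does not state the logarithm base, so base 10 is assumed in this sketch, and the energy unit is taken as MJ/day (the abstract prints it as "mJ/d", presumably megajoules):

```python
import math

def predicted_ree(ffm_kg, conicity, centrality):
    """REE (MJ/day) from the abstract's derived equation:
    REE = 17.4*log(FFM) + 11.4*conicity - 2.4*centrality - 31.3.
    Log base is not stated in the abstract; base 10 is assumed."""
    return 17.4 * math.log10(ffm_kg) + 11.4 * conicity - 2.4 * centrality - 31.3
```

    For a child with FFM = 40 kg, conicity index 1.2 and centrality index 1.0, this yields roughly 7.9 MJ/day, in line with the median MREE reported above.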

  1. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    PubMed

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  2. Mass detection, localization and estimation for wind turbine blades based on statistical pattern recognition

    NASA Astrophysics Data System (ADS)

    Colone, L.; Hovgaard, M. K.; Glavind, L.; Brincker, R.

    2018-07-01

    A method for mass change detection on wind turbine blades using natural frequencies is presented. The approach is based on two statistical tests. The first test decides if there is a significant mass change and the second test is a statistical group classification based on Linear Discriminant Analysis. The frequencies are identified by means of Operational Modal Analysis using natural excitation. Based on the assumption of Gaussianity of the frequencies, a multi-class statistical model is developed by combining finite element model sensitivities in 10 classes of change location on the blade, the smallest area being 1/5 of the span. The method is experimentally validated for a full scale wind turbine blade in a test setup and loaded by natural wind. Mass change from natural causes was imitated with sand bags and the algorithm was observed to perform well with an experimental detection rate of 1, localization rate of 0.88 and mass estimation rate of 0.72.

  3. An automated multi-model based evapotranspiration estimation framework for understanding crop-climate interactions in India

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Jain, M.; Mallick, K.

    2017-12-01

    A remote-sensing-based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA MERRA-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data, hence estimates ET completely from space, and is replicable across all regions of the world. Currently, six surface energy balance models ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI, SSEBop, and a relatively new model, STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear-sky conditions for various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen-ratio-based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that uses all available MODIS 8-day datasets since 2000. These ET products are being used to monitor inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water stress hotspots that need to be targeted with water-saving interventions to maintain agricultural production in the face of climate variability and change.

  4. Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.

    PubMed

    Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie

    2018-05-01

    Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 reliable equations for estimation of LBM in daily clinical practice. The development and validation groups both included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and the other on handgrip strength (HGS), and both also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the gold-standard reference. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed for the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and by the percentages of estimates that were within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when it was applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. LBM estimated from MAMC or HGS was found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  5. Muscle parameters estimation based on biplanar radiography.

    PubMed

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal models are used to compute forces based on movement analysis. Most are built either from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposes an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-ray radiography. First, the subject-specific 3D geometry of the bones and skin envelope was reconstructed from biplanar X-rays. Then, 2200 corresponding control points were identified between a reference model and the subject-specific model. Finally, the shapes of 21 lower limb muscles were estimated using a non-linear transformation between the control points, fitting the muscle shapes of the reference model to the subject-specific model. Twelve musculo-skeletal models were reconstructed and compared to their references. Muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, the method provided an accurate estimation of the muscle lines of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to quick modeling for gait analysis based on biplanar radiography.

  6. Evaluating Satellite-based Rainfall Estimates for Basin-scale Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Yilmaz, K. K.; Hogue, T. S.; Hsu, K.; Gupta, H. V.; Mahani, S. E.; Sorooshian, S.

    2003-12-01

    The reliability of any hydrologic simulation and basin outflow prediction effort depends primarily on the rainfall estimates. The problem of estimating rainfall becomes more obvious in basins with scarce or no rain gauges. We present an evaluation of satellite-based rainfall estimates for basin-scale hydrologic modeling, with particular interest in ungauged basins. The initial phase of this study focuses on comparing mean areal rainfall estimates from a ground-based rain gauge network, NEXRAD radar Stage-III, and satellite-based PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks), and their influence on hydrologic model simulations over several basins in the U.S. Six-hourly accumulations of these competing mean areal rainfall estimates are used as input to the Sacramento Soil Moisture Accounting Model. Preliminary experiments for the Leaf River Basin in Mississippi, for the period of March 2000 - June 2002, reveal that seasonality plays an important role in the comparison: satellite-based rainfall overestimates during the summer and underestimates during the winter with respect to the competing rainfall estimates. The consequence for the hydrologic model is that simulated discharge underestimates the major observed peak discharges during early spring for the basin under study. Future research will entail developing correction procedures, which depend on factors such as seasonality, geographic location and basin size, for satellite-based rainfall estimates over basins with dense rain gauge networks and/or radar coverage. Extension of these correction procedures to satellite-based rainfall estimates over ungauged basins with similar characteristics has the potential to reduce input uncertainty in ungauged basin modeling efforts.

  7. Manifold absolute pressure estimation using neural network with hybrid training algorithm

    PubMed Central

    Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closest to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a close MAP estimation to the actual value. PMID:29190779

  8. Clinical validation of an algorithm to correct the error in the keratometric estimation of corneal power in normal eyes.

    PubMed

    Piñero, David P; Camps, Vicente J; Mateo, Verónica; Ruiz-Fortes, Pedro

    2012-08-01

    To validate clinically in a normal healthy population an algorithm to correct the error in the keratometric estimation of corneal power based on the use of a variable keratometric index of refraction (n(k)). Medimar International Hospital (Oftalmar) and University of Alicante, Alicante, Spain. Case series. Corneal power was measured with a Scheimpflug photography-based system (Pentacam software version 1.14r01) in healthy eyes with no previous ocular surgery. In all cases, keratometric corneal power was also estimated using an adjusted value of n(k) that is dependent on the anterior corneal radius (r(1c)) as follows: n(kadj) = -0.0064286 r(1c) +1.37688. Agreement between the Gaussian (P(c)(Gauss)) and adjusted keratometric (P(kadj)) corneal power values was evaluated. The study evaluated 92 eyes (92 patients; age range 15 to 64 years). The mean difference between P(c)(Gauss) and P(kadj) was -0.02 diopter (D) ± 0.22 (SD) (P=.43). A very strong, statistically significant correlation was found between both corneal powers (r = .994, P<.01). The range of agreement between P(c)(Gauss) and P(kadj) was 0.44 D, with limits of agreement of -0.46 and +0.42 D. In addition, a very strong, statistically significant correlation of the difference between P(c)(Gauss) and P(kadj) and the posterior corneal radius was found (r = 0.96, P<.01). The imprecision in the calculation of corneal power using keratometric estimation can be minimized in clinical practice by using a variable keratometric index that depends on the radius of the anterior corneal surface. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
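
    The adjusted-index correction can be applied directly from the formula in the abstract. This sketch additionally assumes the standard keratometric power formula P = (n_k - 1)/r with r in meters, which is conventional but not spelled out in the abstract:

```python
def adjusted_keratometric_power(r1c_mm):
    """Corneal power (diopters) from the anterior corneal radius r1c (mm),
    using the study's adjusted variable index
        n_kadj = -0.0064286 * r1c + 1.37688
    in the standard keratometric formula P = (n_k - 1) / r (r in meters),
    which is assumed here rather than stated in the abstract."""
    n_kadj = -0.0064286 * r1c_mm + 1.37688
    return (n_kadj - 1.0) / (r1c_mm / 1000.0)
```

    For a typical anterior radius of 7.8 mm this gives roughly 41.9 D, somewhat below the value the classical fixed index n_k = 1.3375 would produce, which is the direction of the correction the study reports.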

  9. An examination of healthy aging across a conceptual continuum: prevalence estimates, demographic patterns, and validity.

    PubMed

    McLaughlin, Sara J; Jette, Alan M; Connell, Cathleen M

    2012-06-01

    Although the notion of healthy aging has gained wide acceptance in gerontology, measuring the phenomenon is challenging. Guided by a prominent conceptualization of healthy aging, we examined how shifting from a more to less stringent definition of healthy aging influences prevalence estimates, demographic patterns, and validity. Data are from adults aged 65 years and older who participated in the Health and Retirement Study. We examined four operational definitions of healthy aging. For each, we calculated prevalence estimates and examined the odds of healthy aging by age, education, gender, and race-ethnicity in 2006. We also examined the association between healthy aging and both self-rated health and death. Across definitions, the prevalence of healthy aging ranged from 3.3% to 35.5%. For all definitions, those classified as experiencing healthy aging had lower odds of fair or poor self-rated health and death over an 8-year period. The odds of being classified as "healthy" were lower among those of advanced age, those with less education, and women than for their corresponding counterparts across all definitions. Moving across the conceptual continuum--from a more to less rigid definition of healthy aging--markedly increases the measured prevalence of healthy aging. Importantly, results suggest that all examined definitions identified a subgroup of older adults who had substantially lower odds of reporting fair or poor health and dying over an 8-year period, providing evidence of the validity of our definitions. Conceptualizations that emphasize symptomatic disease and functional health may be particularly useful for public health purposes.

  10. Cooperative Learning: Improving University Instruction by Basing Practice on Validated Theory

    ERIC Educational Resources Information Center

    Johnson, David W.; Johnson, Roger T.; Smith, Karl A.

    2014-01-01

    Cooperative learning is an example of how theory validated by research may be applied to instructional practice. The major theoretical base for cooperative learning is social interdependence theory. It provides clear definitions of cooperative, competitive, and individualistic learning. Hundreds of research studies have validated its basic…

  11. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorological stations in Peninsular Malaysia using data for the year 2010 collected from the Malaysian Meteorological Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin-plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for estimating the temperature for the rest of the months.
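
    Of the two methods, IDW is the simpler to sketch: the estimate at an unsampled point is a distance-weighted average of the station values, with nearer stations weighted more heavily. A generic illustration with made-up coordinates, not the paper's data:

```python
def idw(x, y, stations, power=2.0):
    """Inverse Distance Weighted estimate at (x, y) from a list of
    (xi, yi, ti) station records; weight = 1 / distance**power."""
    num = den = 0.0
    for xi, yi, ti in stations:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return ti  # exact hit on a station: return its value
        w = d2 ** (-power / 2.0)
        num += w * ti
        den += w
    return num / den
```

    Unlike RBF variants, IDW never extrapolates outside the range of the station values, which is one reason the two families can rank differently month to month.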

  12. Validation study of the Tanaka and Kawasaki equations to estimate the daily sodium excretion by a spot urine sample.

    PubMed

    Mill, José Geraldo; Rodrigues, Sérgio Lamêgo; Baldo, Marcelo Perim; Malta, Deborah Carvalho; Szwarcwald, Celia Landmann

    2015-12-01

    To validate the Tanaka and Kawasaki formulas, which estimate salt intake from the sodium/creatinine ratio in a spot urine sample. Two hundred and seventy-two adults (20-69 years old; 52.6% women) provided a 24 h urine collection and two urine spots collected on the same day (one fasting - spot 1 - and one non-fasting - spot 2). Anthropometry, blood pressure and fasting blood were measured on the same day. Agreement between salt consumption measured in the 24 h urine and that estimated from the urine spots was determined by Pearson's correlation (r) and the Bland-Altman method. The mean salt consumption measured by the 24 h sodium excretion was 10.4 ± 5.3 g/day. The correlation between the measured 24 h sodium excretion and the estimates based on spots 1 and 2, respectively, was only moderate according to Tanaka (r = 0.51 and r = 0.55; p < 0.001) and Kawasaki (r = 0.52 and r = 0.54; p < 0.001). We observed increasing underestimation of salt consumption by the Tanaka formula with increasing salt consumption and, conversely, overestimation by the Kawasaki formula. The estimation of salt consumption (difference between measured and calculated salt consumption lower than 1 g/day) was adequate only when consumption was between 9-12 g/day (Tanaka) and 12-18 g/day (Kawasaki). Spot urine sampling is adequate to estimate salt consumption only among individuals whose actual consumption is near the population mean.
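
    The Tanaka formula itself is not reproduced in the abstract. For orientation, here is a sketch using the form commonly printed in the literature; every coefficient below comes from that published form, not from this validation study, and should be checked against the original Tanaka reference before use:

```python
def tanaka_salt_g_per_day(na_spot_meq_l, cr_spot_mg_dl, age_y, weight_kg, height_cm):
    """Estimated 24-h salt intake (g/day) from a spot urine sample via the
    Tanaka formula as commonly published (coefficients from the literature,
    not from this study; verify against the original paper)."""
    # Predicted 24-h urinary creatinine excretion (mg/day)
    pr_ucr24 = -2.04 * age_y + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # Estimated 24-h sodium excretion (mEq/day)
    xna = na_spot_meq_l / (cr_spot_mg_dl * 10.0) * pr_ucr24
    na_24h_meq = 21.98 * xna ** 0.392
    return na_24h_meq * 58.5 / 1000.0  # convert mEq Na to g NaCl
```

    The 0.392 exponent compresses the estimate toward the middle of the range, which is consistent with the underestimation at high intakes reported above.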

  13. Development and Validation of Different Ultraviolet-Spectrophotometric Methods for the Estimation of Besifloxacin in Different Simulated Body Fluids.

    PubMed

    Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K

    2015-01-01

    In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows an absorption maximum (λmax) at 289 nm in distilled water, simulated tears and phosphate-buffered saline. The linearity of the developed methods was in the range of 3-30 μg/ml, with correlation coefficients (r(2)) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate-buffered saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limit of detection in the different media was 0.62, 0.72 and 0.88 μg/ml, respectively, and the limit of quantification was 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonisation guidelines with respect to specificity, linearity, range, accuracy, precision and robustness, and was found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
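
    The reported detection and quantification limits are consistent with the ICH Q2(R1) convention LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the response and S the slope of the calibration line. A generic sketch of that calculation, not tied to the study's data:

```python
def fit_line(xs, ys):
    """Least-squares calibration line y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(slope, sigma):
    """ICH Q2(R1) limits of detection and quantification from the
    calibration slope and the response standard deviation sigma."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```

    Note the fixed 10/3.3 ratio between LOQ and LOD implied by this convention, which the abstract's figures (e.g. 1.88 vs 0.62 μg/ml) reproduce.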

  14. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences in concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite a low overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of the radar-gauge differences into the gauge area-point error variance and the radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful for better understanding the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.

  15. Age Estimation Based on Children's Voice: A Fuzzy-Based Decision Fusion Strategy

    PubMed Central

    Ting, Hua-Nong

    2014-01-01

    Automatic estimation of a speaker's age is a challenging research topic in the area of speech analysis. In this paper, a novel approach to estimate a speaker's age is presented. The method features a “divide and conquer” strategy wherein the speech data are divided into six groups based on the vowel classes. There are two reasons behind this strategy. First, reduction in the complicated distribution of the processing data improves the classifier's learning performance. Second, different vowel classes contain complementary information for age estimation. Mel-frequency cepstral coefficients are computed for each group and single layer feed-forward neural networks based on self-adaptive extreme learning machine are applied to the features to make a primary decision. Subsequently, fuzzy data fusion is employed to provide an overall decision by aggregating the classifier's outputs. The results are then compared with a number of state-of-the-art age estimation methods. Experiments conducted based on six age groups including children aged between 7 and 12 years revealed that fuzzy fusion of the classifier's outputs resulted in considerable improvement of up to 53.33% in age estimation accuracy. Moreover, the fuzzy fusion of decisions aggregated the complementary information of a speaker's age from various speech sources. PMID:25006595
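
    The final fusion step of the abstract above can be sketched schematically: each vowel-group classifier emits a score per age group, and the scores are aggregated into one decision. Simple averaging stands in here for the paper's fuzzy aggregation, and the data are illustrative:

```python
def fuse_decisions(classifier_scores):
    """Schematic decision fusion: average the per-age-group scores of
    several vowel-group classifiers and return the index of the
    best-supported age group. (Simple averaging is a stand-in for the
    paper's fuzzy aggregation.)"""
    n = len(classifier_scores)
    groups = len(classifier_scores[0])
    fused = [sum(s[g] for s in classifier_scores) / n for g in range(groups)]
    return max(range(groups), key=fused.__getitem__)
```

    The point of the fusion, whatever the aggregation operator, is that the vowel classes carry complementary age cues, so the pooled decision can beat every individual classifier.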

  16. A validated stability indicating RP-HPLC method for estimation of Armodafinil in pharmaceutical dosage forms and characterization of its base hydrolytic product.

    PubMed

    Venkateswarlu, Kambham; Rangareddy, Ardhgeri; Narasimhaiah, Kanaka; Sharma, Hemraj; Bandi, Naga Mallikarjuna Raja

    2017-01-01

    The main objective of the present study was to develop an RP-HPLC method for the estimation of Armodafinil in pharmaceutical dosage forms and to characterize its base hydrolytic product. Separation was carried out on a C18 column using a mobile phase of water and methanol (45:55% v/v) at a flow rate of 1 ml/min, with detection at 220 nm. Stress studies were performed under milder conditions followed by stronger conditions so as to obtain sufficient degradation of around 20%. A total of five degradation products were detected and separated from the analyte. The linearity of the proposed method was investigated in the range of 20-120 µg/ml for Armodafinil. The detection limit and quantification limit were found to be 0.01183 µg/ml and 0.035 µg/ml, respectively. The precision (% RSD) was less than 2% and the recovery was between 98% and 102%. Armodafinil was found to be most sensitive to base hydrolysis, yielding its carboxylic acid as the degradant. The developed method is a stability-indicating assay, suitable for quantifying Armodafinil in the presence of possible degradants. The drug was sensitive to acid, base and photolytic stress, and resistant to thermal and oxidative stress.

  17. Validation of cystatin C-based equations for evaluating residual renal function in patients on continuous ambulatory peritoneal dialysis.

    PubMed

    Zhong, Hui; Zhang, Wei; Qin, Min; Gou, ZhongPing; Feng, Ping

    2017-06-01

    Residual renal function needs to be assessed frequently in patients on continuous ambulatory peritoneal dialysis (CAPD). A commonly used method is to measure creatinine (Cr) and urea clearance in urine collected over 24 h, but collection can be cumbersome and difficult to manage. A faster, simpler alternative is to measure levels of cystatin C (CysC) in serum, but the accuracy and reliability of this method are controversial. Our study aims to validate published CysC-based equations for estimating residual renal function in patients on CAPD. Residual renal function was measured by calculating the average clearance of urea and Cr in 24-h urine, as well as by applying the CysC- or Cr-based equations published by Hoek and Yang. We then compared the performance of the equations against the 24-h urine results. In our sample of 255 patients aged 47.9 ± 15.6 years, the serum CysC level was 6.43 ± 1.13 mg/L. Serum CysC level was not significantly associated with age, gender, height, weight, body mass index, hemoglobin, intact parathyroid hormone, normalized protein catabolic rate or the presence of diabetes. In contrast, serum CysC levels did correlate with peritoneal clearance of CysC and with levels of prealbumin and high-sensitivity C-reactive protein. Residual renal function was 2.56 ± 2.07 mL/min/1.73 m² based on 24-h urine sampling, compared with estimates (mL/min/1.73 m²) of 2.98 ± 0.66 for Hoek's equation, 2.03 ± 0.97 for Yang's CysC-based equation and 2.70 ± 1.30 for Yang's Cr-based equation. Accuracies within 30%/50% of measured residual renal function for the three equations were 29.02%/48.24%, 34.90%/56.86% and 31.37%/54.90%, respectively. The three equations showed similar limits of agreement and differed significantly from the measured value. Published CysC-based equations do not appear to be particularly reliable for patients on CAPD. Further development and validation of CysC-based equations should take into account peritoneal clearance of

  18. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. Many research articles have reported that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem whose cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple yet effective penalty-based approach is followed to incorporate measured data. The penalization parameter not only incorporates corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  19. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). They construct an observation model to relate the parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. A maximum likelihood (ML) estimator is then used to jointly estimate all the parameters directly from the projection data, without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  20. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

    Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, covering publications between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliabilities of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 records were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity and inter- and intra-rater reliabilities, while two studies examined only concurrent validity. The reviewed studies were of moderate to good methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessment.

  1. Estimation and validation of the stability and control derivatives of the nonlinear dynamic model of a fixed-wing UAV [Estimation et validation des dérivées de stabilité et de contrôle du modèle dynamique non-linéaire d'un drone à voilure fixe]

    NASA Astrophysics Data System (ADS)

    Courchesne, Samuel

    Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms; obtaining them is a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on the analysis of estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate processing tasks from multidisciplinary fields, such as modeling, parameter estimation, instrumentation, the definition of flight maneuvers and validation. The system under study is a nonlinear model with six degrees of freedom and a linear aerodynamic model. Time-domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. The Matlab parameter estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A considerable part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone, owing to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical vortex-method analysis.

  2. The validation study on a three-dimensional burn estimation smart-phone application: accurate, free and fast?

    PubMed

    Cheah, A K W; Kangkorn, T; Tan, E H; Loo, M L; Chong, S J

    2018-01-01

    Accurate estimation of total body surface area burned (TBSAB) is a crucial aspect of early burn management. It helps guide resuscitation and is essential in the calculation of fluid requirements. Conventional methods of estimation can often lead to large discrepancies in burn percentage estimation. We aimed to compare a new method of TBSAB estimation using a three-dimensional smart-phone application named 3D Burn Resuscitation (3D Burn) against conventional methods of estimation: the Rule of Palm, the Rule of Nines and the Lund and Browder chart. Three volunteer subjects were moulaged with simulated burn injuries of 25%, 30% and 35% total body surface area (TBSA), respectively. Various healthcare workers were invited to use both the 3D Burn application and the conventional methods stated above to estimate the volunteer subjects' burn percentages. Collective relative estimations across the groups showed that the Rule of Palm, the Rule of Nines and the Lund and Browder chart over-estimated burn area by an average of 10.6%, 19.7% and 8.3% TBSA, respectively, while the 3D Burn application under-estimated burns by an average of 1.9%. There was a statistically significant difference between the 3D Burn application estimations and all three other modalities (p < 0.05). The time taken to use the application was significantly longer than for traditional methods of estimation. The 3D Burn application, although slower, allowed more accurate TBSAB measurement compared with conventional methods. The validation study has shown that the 3D Burn application is useful in improving the accuracy of TBSAB measurement. Further studies are warranted, and there are plans to repeat the above study in a different centre overseas as part of a multi-centre study, with a view to progressing to a prospective study that compares the accuracy of the 3D Burn application against conventional methods on actual burn patients.
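    For reference, the conventional adult Rule of Nines mentioned above reduces to a fixed lookup over body regions. The sketch below is a whole-region simplification (clinical charts also score partial regions, and paediatric proportions differ):

```python
# Adult Rule of Nines: each region's fixed share of total body surface area (%).
RULE_OF_NINES = {
    "head_neck": 9.0,
    "arm_left": 9.0,
    "arm_right": 9.0,
    "torso_front": 18.0,
    "torso_back": 18.0,
    "leg_left": 18.0,
    "leg_right": 18.0,
    "perineum": 1.0,
}

def tbsa_rule_of_nines(burned_regions):
    """Estimate %TBSA burned from a list of fully burned regions."""
    return sum(RULE_OF_NINES[region] for region in burned_regions)

print(tbsa_rule_of_nines(["torso_front", "arm_left"]))  # 27.0
```

    The region shares sum to 100%, which is why whole-region scoring tends to over-estimate partial burns, consistent with the over-estimation the study reports for the conventional methods.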

  3. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess the content and criterion validity, as well as the reliability, of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and the PCOA, assessed with the Pearson correlation coefficient, established criterion validity. Consistent student performance since 2012, assessed using the Kuder-Richardson coefficient (KR-20), reflected the reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable by using a robust writing-reviewing process and psychometric analyses. PMID:26941435

  4. Estimating chronic hepatitis C prognosis using transient elastography-based liver stiffness: A systematic review and meta-analysis.

    PubMed

    Erman, A; Sathya, A; Nam, A; Bielecki, J M; Feld, J J; Thein, H-H; Wong, W W L; Grootendorst, P; Krahn, M D

    2018-05-01

    Chronic hepatitis C (CHC) is a leading cause of hepatic fibrosis and cirrhosis. The level of fibrosis is traditionally established by histology, and prognosis is estimated using fibrosis progression rates (FPRs; the annual probability of progressing across histological stages). However, newer noninvasive alternatives are quickly replacing biopsy. One alternative, transient elastography (TE), quantifies fibrosis by measuring liver stiffness (LSM). Given these developments, the purpose of this study was (i) to estimate prognosis in treatment-naïve CHC patients using TE-based liver stiffness progression rates (LSPRs) as an alternative to FPRs and (ii) to compare consistency between LSPRs and FPRs. A systematic literature search was performed using multiple databases (January 1990 to February 2016). LSPRs were calculated either by a direct method (given the difference in serial LSMs and the time elapsed) or by an indirect method (given a single LSM and the estimated duration of infection) and pooled using random-effects meta-analyses. For validation purposes, FPRs were also estimated. Heterogeneity was explored by random-effects meta-regression. Twenty-seven studies reporting on 39 groups of patients (N = 5874) were identified, with 35 groups allowing for indirect and 8 for direct estimation of LSPRs. The majority (~58%) of patients were HIV/HCV-coinfected. The estimated time-to-cirrhosis based on TE vs biopsy was 39 and 38 years, respectively. In univariate meta-regressions, male sex and HIV coinfection were positively associated with LSPRs, while age at assessment was negatively associated. Noninvasive prognosis of HCV is consistent with FPRs in predicting time-to-cirrhosis, but more longitudinal studies of liver stiffness are needed to obtain refined estimates. © 2017 John Wiley & Sons Ltd.
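    The two LSPR calculation methods described above reduce to simple rate formulas. The sketch below illustrates them; the 5.0 kPa healthy-baseline stiffness in the example is an assumed value for illustration, not a figure taken from the study.

```python
def lspr_direct(lsm_first, lsm_last, years_between):
    """Direct LSPR: change between two serial liver stiffness
    measurements divided by the time elapsed (kPa/year)."""
    return (lsm_last - lsm_first) / years_between

def lspr_indirect(lsm, lsm_baseline, infection_duration_years):
    """Indirect LSPR: a single LSM, relative to an assumed healthy
    baseline stiffness, spread over the estimated infection duration."""
    return (lsm - lsm_baseline) / infection_duration_years

# e.g. a single LSM of 12.5 kPa, an assumed 5.0 kPa baseline, 25 years infected:
print(lspr_indirect(12.5, 5.0, 25.0))  # 0.3 kPa/year
```

    The indirect method is usable when only one LSM exists, which is why most groups in the meta-analysis (35 of 39) allowed only indirect estimation.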

  5. Ground validation of DPR precipitation rate over Italy using H-SAF validation methodology

    NASA Astrophysics Data System (ADS)

    Puca, Silvia; Petracca, Marco; Sebastianelli, Stefano; Vulpiani, Gianfranco

    2017-04-01

    The H-SAF project (Satellite Application Facility on Support to Operational Hydrology and Water Management, funded by EUMETSAT) is aimed at retrieving key hydrological parameters such as precipitation, soil moisture and snow cover. Within the H-SAF consortium, the Product Precipitation Validation Group (PPVG) evaluates the accuracy of instantaneous and accumulated precipitation products with respect to ground radar and rain gauge data, adopting the same methodology (a Unique Common Code) throughout Europe. The adopted validation methodology can be summarized by the following steps: (1) quality control of ground data (radar and rain gauge); (2) spatial interpolation of rain gauge measurements; (3) up-scaling of radar data to the satellite native grid; (4) temporal comparison of satellite and ground-based precipitation products; and (5) production and evaluation of continuous and multi-categorical statistical scores for long time series and case studies. The statistical scores are evaluated on the satellite product's native grid. With the advent of the GPM era starting in March 2014, new global precipitation products have become available. The validation methodology developed in H-SAF is easily applicable to different precipitation products. In this work, we have validated instantaneous precipitation data estimated from the DPR (Dual-frequency Precipitation Radar) instrument onboard the GPM-CO (Global Precipitation Measurement Core Observatory) satellite. In particular, we have analyzed the near-surface and estimated precipitation fields collected in the Level-2A product for 3 different scans (NS, MS and HS). The Italian radar mosaic managed by the National Department of Civil Protection, available operationally every 10 minutes, is used as the ground reference. The results obtained highlight the capability of the DPR to properly identify precipitation areas, with higher accuracy in estimating stratiform precipitation (especially for the HS).

  6. Confidence in outcome estimates from systematic reviews used in informed consent.

    PubMed

    Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette

    2016-12-01

    Evidence-based dentistry now guides informed consent, in which clinicians are obliged to provide patients with the most current best evidence, or best estimates of outcomes, of regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet clinicians require that the estimates provided by systematic reviews be verified for validity and reliability, and contextualized as to performance competency, so that they may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe the types of informed estimates that give clinicians confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes on a subject of interest from best evidence. Best evidence is verified through critical appraisal using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or by checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest-margin performance competency (structural failure), a high-margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding

  7. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  8. Estimating salinity stress in sugarcane fields with spaceborne hyperspectral vegetation indices

    NASA Astrophysics Data System (ADS)

    Hamzeh, S.; Naseri, A. A.; AlaviPanah, S. K.; Mojaradi, B.; Bartholomeus, H. M.; Clevers, J. G. P. W.; Behzad, M.

    2013-04-01

    The presence of salt in the soil profile negatively affects the growth and development of vegetation. As a result, the spectral reflectance of vegetation canopies varies for different salinity levels. This research was conducted to (1) investigate the capability of satellite-based hyperspectral vegetation indices (VIs) for estimating soil salinity in agricultural fields, (2) evaluate the performance of 21 existing VIs and (3) develop new VIs based on a combination of wavelengths sensitive to multiple stresses and find the best one for estimating soil salinity. For this purpose a Hyperion image of September 2, 2010, and data on soil salinity at 108 locations in sugarcane (Saccharum officinarum L.) fields were used. Results show that soil salinity could well be estimated by some of these VIs. Indices related to chlorophyll absorption bands or based on a combination of chlorophyll and water absorption bands had the highest correlation with soil salinity. In contrast, indices that are only based on water absorption bands had low to medium correlations, while indices that use only visible bands did not perform well. From the investigated indices, the optimized soil-adjusted vegetation index (OSAVI) had the strongest relationship (R2 = 0.69) with soil salinity for the training data, but it did not perform well in the validation phase. The validation procedure showed that the new salinity and water stress indices (SWSI) implemented in this study (SWSI-1, SWSI-2, SWSI-3) and the Vogelmann red edge index yielded the best results for estimating soil salinity for independent fields, with root mean square errors of 1.14, 1.15, 1.17 and 1.15 dS/m, respectively. Our results show that soil salinity can be estimated by satellite-based hyperspectral VIs, but validation of the obtained models on independent data is essential for selecting the best model.

  9. Landsat based historical (1984-2014) crop water use estimates and trends in the Southwestern United States

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Schauer, M.; Friedrichs, M.; Velpuri, N. M.; Singh, R. K.

    2016-12-01

    Remote sensing-based field-scale evapotranspiration (ET) maps are useful for characterizing water use patterns and assessing crop performance. Historical (1984-2014) Landsat-based ET maps were generated for major irrigation districts in the southwestern US. A total of 3,396 Landsat images were processed using the Operational Simplified Surface Energy Balance (SSEBop) model, which integrates weather data and remotely sensed images to estimate monthly and annual ET within the study areas over the 31 years. Model output evaluation and validation using point-based eddy covariance flux tower data, gridded-flux data and water balance ET approaches indicated a relatively strong association between SSEBop ET and the validation datasets. Historical trend analysis of seven agro-hydrologic variables using the Seasonal Mann-Kendall test showed interesting results. In a pairwise comparison, management-influenced variables such as actual evapotranspiration (ETa), land surface temperature (Ts) and runoff (Q) were found to be more variable than their corresponding climate counterparts of atmospheric water demand (ETo), air temperature (Ta) and precipitation, responding to the impacts of management decisions. Our results indicated that only air temperature showed a consistent increase (up to 1.2 K) across all 9 irrigation sub-basins during the 31 years. District-wide ETa estimates were used to compute historical crop water use volumes and monetary savings for the Palo Verde Irrigation District (PVID). During the peak of the crop fallowing program in PVID, the water savings reached a maximum of 85,000 ac-ft per year, equivalent to a dollar amount of $600 million. This study has many applications in planning water resource allocation, monitoring and assessing water usage and performance, and quantifying the impacts of climate and land use/land cover changes on water resources. With increased computational efficiency and model development, similar studies can be conducted in other parts of the world.

  10. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained from independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and observations on the analytical results are reported. The fuzzy-logic based rules were shown to be applicable to improve the interpretation of results and to facilitate overall evaluation of the multiplex method.

  11. Validation of travel times to hospital estimated by GIS.

    PubMed

    Haynes, Robin; Jones, Andrew P; Sauerzapf, Violet; Zhao, Hongxin

    2006-09-19

    An increasing number of studies use GIS estimates of car travel times to health services, without presenting any evidence that the estimates are representative of real travel times. This investigation compared GIS estimates of travel times with the actual times reported by a sample of 475 cancer patients who had travelled by car to attend clinics at eight hospitals in the North of England. Car travel times were estimated by GIS using the shortest road route between home address and hospital and average speed assumptions. These estimates were compared with reported journey times and straight line distances using graphical, correlation and regression techniques. There was a moderately strong association between reported times and estimated travel times (r = 0.856). Reported travel times were similarly related to straight line distances. Altogether, 50% of travel time estimates were within five minutes of the time reported by respondents, 77% were within ten minutes and 90% were within fifteen minutes. The distribution of over- and under-estimates was symmetrical, but estimated times tended to be longer than reported times with increasing distance from hospital. Almost all respondents rounded their travel time to the nearest five or ten minutes. The reason for many cases of reported journey times exceeding the estimated times was confirmed by respondents' comments as traffic congestion. GIS estimates of car travel times were moderately close approximations to reported times. GIS travel time estimates may be superior to reported travel times for modelling purposes because reported times contain errors and can reflect unusual circumstances. Comparison with reported times did not suggest that estimated times were a more sensitive measure than straight line distance.
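    The comparison statistics this study relies on (Pearson correlation between estimated and reported times, and the share of estimates within fixed time tolerances) can be sketched on made-up data:

```python
import numpy as np

# Hypothetical reported vs GIS-estimated car travel times (minutes).
reported = np.array([10, 20, 35, 50, 15, 25, 40, 60], dtype=float)
estimated = np.array([12, 18, 30, 55, 15, 28, 38, 52], dtype=float)

r = np.corrcoef(reported, estimated)[0, 1]     # Pearson correlation
abs_err = np.abs(estimated - reported)
within_5 = np.mean(abs_err <= 5) * 100         # % of estimates within 5 minutes
within_10 = np.mean(abs_err <= 10) * 100       # % within 10 minutes
print(round(r, 3), within_5, within_10)
```

    Reporting tolerance bands alongside the correlation, as the study does, captures how rounding of reported times (to the nearest 5 or 10 minutes) bounds the achievable agreement.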

  12. MRAS state estimator for speed sensorless ISFOC induction motor drives with Luenberger load torque estimation.

    PubMed

    Zorgani, Youssef Agrebi; Koubaa, Yassine; Boussak, Mohamed

    2016-03-01

    This paper presents a novel method for estimating the load torque of a sensorless indirect stator flux oriented controlled (ISFOC) induction motor drive based on the model reference adaptive system (MRAS) scheme. In essence, this method interconnects a speed estimator with a load torque observer. For this purpose, a MRAS is applied to estimate the rotor speed with tuned load torque in order to obtain a high-performance ISFOC induction motor drive. The reference and adjustable models, developed in the stationary stator reference frame, are used in the MRAS scheme to estimate the speed from the measured terminal voltages and currents. The load torque is estimated by means of a Luenberger observer defined through the mechanical equation. The observer state matrix depends on the mechanical characteristics of the machine, taking into account the viscous friction coefficient and the moment of inertia. Simulation results are presented to validate the proposed method and to highlight the influence of variations in the moment of inertia and the friction coefficient on the speed and the estimated load torque. Experimental results concerning sensorless speed control with load torque estimation are also presented to validate the effectiveness of the proposed method. The complete sensorless ISFOC scheme with load torque estimation is successfully implemented in real time using a digital signal processor board (dSPACE DS1104) for a laboratory 3 kW induction motor. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Using groundwater levels to estimate recharge

    USGS Publications Warehouse

    Healy, R.W.; Cook, P.G.

    2002-01-01

    Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relate to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
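    The water-table fluctuation method described above has a simple closed form, recharge = Sy × Δh (specific yield times the water-table rise), which can be sketched as follows (the values are illustrative, not from the paper):

```python
def recharge_wtf(specific_yield, water_level_rise_m):
    """Water-table fluctuation method: recharge depth (mm) produced by a
    water-table rise of water_level_rise_m, given the specific yield Sy."""
    return specific_yield * water_level_rise_m * 1000.0  # m -> mm

# e.g. Sy = 0.15 and a 0.30 m water-table rise over a recharge event:
print(round(recharge_wtf(0.15, 0.30), 1))  # 45.0 mm
```

    As the review notes, the main uncertainty enters through the specific yield, which is hard to determine accurately; the rise itself must also be the rise attributable to recharge rather than to pumping or barometric effects.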

  14. Global Validation of MODIS Atmospheric Profile-Derived Near-Surface Air Temperature and Dew Point Estimates

    NASA Astrophysics Data System (ADS)

    Famiglietti, C.; Fisher, J.; Halverson, G. H.

    2017-12-01

    This study validates a method of remotely sensing near-surface meteorology that vertically interpolates MODIS atmospheric profiles to the surface pressure level. The extraction of air temperature and dew point observations at a two-meter reference height from 2001 to 2014 yields global moderate- to fine-resolution near-surface temperature distributions, which are compared to geographically and temporally corresponding measurements from 114 ground meteorological stations distributed worldwide. This analysis is the first robust, large-scale validation of MODIS-derived near-surface air temperature and dew point estimates, both of which serve as key inputs in models of energy, water, and carbon exchange between the land surface and the atmosphere. Results show strong linear correlations between remotely sensed and in-situ near-surface air temperature measurements (R2 = 0.89), as well as between dew point observations (R2 = 0.77). Performance is relatively uniform across climate zones. Extending the mean percent errors per climate zone to the entire remote sensing dataset allows MODIS air temperature and dew point uncertainties to be determined on a global scale.
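    The abstract does not specify the interpolation scheme, but interpolating a retrieved temperature profile to the surface pressure level is commonly done linearly in log-pressure between the two bracketing profile levels. A hedged sketch with made-up profile values:

```python
import math

# Hypothetical sketch: interpolate a retrieved temperature profile to the
# surface pressure level, linear in log(p). The scheme and the profile
# values are assumptions for illustration, not taken from the study.

def interp_to_surface(pressures_hpa, temps_k, p_surface_hpa):
    """Temperature (K) at surface pressure, linear-in-log(p) interpolation."""
    for i in range(len(pressures_hpa) - 1):
        p1, p2 = pressures_hpa[i], pressures_hpa[i + 1]
        t1, t2 = temps_k[i], temps_k[i + 1]
        if p1 <= p_surface_hpa <= p2:
            w = (math.log(p_surface_hpa / p1)) / (math.log(p2 / p1))
            return t1 + w * (t2 - t1)
    raise ValueError("surface pressure outside profile range")

# hypothetical MODIS profile levels (hPa) and temperatures (K)
profile_p = [700.0, 850.0, 925.0, 1000.0]
profile_t = [278.0, 287.0, 291.0, 295.0]
print(round(interp_to_surface(profile_p, profile_t, 960.0), 1))
```

    The two-meter air temperature and dew point would then be read from such surface-level interpolations at each pixel.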

  15. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources, including lack of effort. This report describes the derivation and validation of a performance validity parameter for reaction time, using a set of malingering indices ("Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive and on evidence from neuropsychological testing, self-report, and clinical data. In study (1), a validity parameter is derived from reaction time data of a sample composed of inpatients with recent severe brain lesions who were not involved in litigation and of litigants with and without brain lesions. In study (2), the validity parameter is tested in an independent sample of litigants. In study (3), the parameter is applied to an independent sample comprising cooperative and non-cooperative examinees. Logistic regression analysis yielded a validity parameter based on median reaction time and its standard deviation. It performed satisfactorily in studies (2) and (3) (study 2: sensitivity = 0.94, specificity = 1.00; study 3: sensitivity = 0.79, specificity = 0.87). The findings suggest that median reaction time and its standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
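    An embedded validity indicator of the kind this abstract describes can be sketched as a logistic score on the two predictors it names, median reaction time and its standard deviation, thresholded to flag possible negative response bias. The coefficients, cutoff, and toy data below are illustrative placeholders, not the published parameter:

```python
import math
import statistics

# Hedged sketch: logistic score on median RT and RT standard deviation,
# thresholded to flag possible negative response bias. Weights, cutoff,
# and the toy data are illustrative only, not the published model.

B0, B_MED, B_SD = -12.0, 0.012, 0.015   # hypothetical logistic weights

def suspect_score(reaction_times_ms):
    med = statistics.median(reaction_times_ms)
    sd = statistics.stdev(reaction_times_ms)
    z = B0 + B_MED * med + B_SD * sd
    return 1.0 / (1.0 + math.exp(-z))    # probability-like score in (0, 1)

def flag_invalid(reaction_times_ms, cutoff=0.5):
    return suspect_score(reaction_times_ms) >= cutoff

# cooperative examinee: fast, consistent responses
ok = [420, 450, 430, 445, 460, 435, 440]
# possibly non-cooperative: slow, highly variable responses
slow = [650, 1100, 800, 1400, 700, 1250, 900]
print(flag_invalid(ok), flag_invalid(slow))  # -> False True
```

    In the study itself the weights came from logistic regression against the Slick-criteria classification; here they are chosen by hand so that slow, variable response patterns cross the cutoff.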

  16. Clinical Validity of the ADI-R in a US-Based Latino Population

    ERIC Educational Resources Information Center

    Vanegas, Sandra B.; Magaña, Sandra; Morales, Miguel; McNamara, Ellyn

    2016-01-01

    The Autism Diagnostic Interview-Revised (ADI-R) has been validated as a tool to aid in the diagnosis of Autism; however, given the growing diversity in the United States, the ADI-R must be validated for different languages and cultures. This study evaluates the validity of the ADI-R in a US-based Latino, Spanish-speaking population of 50 children…

  17. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur at low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
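    The survival-based CIR variant in this abstract is not fully specified, but it generalizes the classic two-class change-in-ratio abundance estimator, which can be sketched as follows (all numbers made up): with p1 and p2 the proportions of one class before and after a known removal of r_x animals of that class and r_y of the other, pre-removal abundance is (r_x − (r_x + r_y)·p2) / (p1 − p2).

```python
# Classic two-class change-in-ratio (CIR) abundance estimator; the
# paper's survival-based variant differs in detail. Inputs are made up.

def cir_abundance(p1, p2, r_x, r_y):
    """Pre-removal abundance from class proportions before (p1) and
    after (p2) a known removal of r_x class-x and r_y class-y animals."""
    r_total = r_x + r_y
    return (r_x - r_total * p2) / (p1 - p2)

# e.g. proportion drops from 0.50 to 0.40 after removing 60 + 20 animals
print(round(cir_abundance(0.5, 0.4, 60, 20)))
```

    The estimator's precision depends on how far apart p1 and p2 are, which matches the low precision the authors report.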

  18. Sensitivity of Teacher Value-Added Estimates to Student and Peer Control Variables

    ERIC Educational Resources Information Center

    Johnson, Matthew T.; Lipscomb, Stephen; Gill, Brian

    2015-01-01

    Teacher value-added models (VAMs) must isolate teachers' contributions to student achievement to be valid. Well-known VAMs use different specifications, however, leaving policymakers with little clear guidance for constructing a valid model. We examine the sensitivity of teacher value-added estimates under different models based on whether they…

  19. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  20. Validation of limited sampling strategy for the estimation of mycophenolic acid exposure in Chinese adult liver transplant recipients.

    PubMed

    Hao, Chen; Erzheng, Chen; Anwei, Mao; Zhicheng, Yu; Baiyong, Shen; Xiaxing, Deng; Weixia, Zhang; Chenghong, Peng; Hongwei, Li

    2007-12-01

    Mycophenolate mofetil (MMF) is indicated as immunosuppressive therapy in liver transplantation. Abbreviated models for the estimation of the mycophenolic acid (MPA) area under the concentration-time curve (AUC) have been established by limited sampling strategies (LSSs) in adult liver transplant recipients. In the current study, the performance of the abbreviated models in predicting MPA exposure was validated in an independent group of patients. A total of 30 MPA pharmacokinetic profiles from 30 liver transplant recipients receiving MMF in combination with tacrolimus were used to compare the performance of 8 models with a full 10 time-point MPA-AUC. Linear regression analysis and Bland-Altman analysis were used to compare the estimated MPA-AUC0-12h from each model against the measured MPA-AUC0-12h. A wide range of agreement was shown when estimated MPA-AUC0-12h was compared with measured MPA-AUC0-12h, with coefficients of determination (r2) ranging from 0.479 to 0.936. The model based on MPA concentrations C1h, C2h, C6h, and C8h had the best ability to predict measured MPA-AUC0-12h, with the best coefficient of determination (r2 = 0.936), an excellent prediction bias (2.18%), the best prediction precision (5.11%), and the best prediction variation (2SD = +/-7.88 mg·h/L). However, the model based on the sampling time points C1h, C2h, and C4h was more suitable where clinical convenience is a concern, having a shorter sampling interval, an excellent coefficient of determination (r2 = 0.795), an excellent prediction bias (3.48%), an acceptable prediction precision (14.37%), and a good prediction variation (2SD = +/-13.23 mg·h/L). Measured MPA-AUC0-12h could be best predicted using MPA concentrations C1h, C2h, C6h, and C8h, whereas the model based on C1h, C2h, and C4h was more feasible in clinical application. Copyright (c) 2007 AASLD.
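    A limited-sampling-strategy model of the kind validated above is simply a linear combination of a few timed concentrations. The intercept and coefficients below are placeholders for illustration, NOT the published model:

```python
# Hedged sketch of applying an LSS model: estimated AUC0-12h as a linear
# combination of MPA concentrations at 1, 2, and 4 h post-dose.
# b0..b3 are placeholder coefficients, not the published regression.

def estimate_auc(c1h, c2h, c4h, b0=8.0, b1=1.5, b2=2.0, b3=5.0):
    """Estimated MPA-AUC0-12h (mg*h/L) from concentrations (mg/L)."""
    return b0 + b1 * c1h + b2 * c2h + b3 * c4h

# hypothetical patient: C1h = 8.0, C2h = 4.5, C4h = 2.0 mg/L
print(estimate_auc(8.0, 4.5, 2.0))  # -> 39.0
```

    Validation as in the study would regress such estimates against measured 10-point AUCs (r2, bias, precision) and check agreement with a Bland-Altman plot.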