Sample records for validation results based

  1. MRI-based modeling for radiocarpal joint mechanics: validation criteria and results for four specimen-specific models.

    PubMed

    Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet

    2011-10-01

    The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations

  2. Hopes and Cautions for Instrument-Based Evaluation of Consent Capacity: Results of a Construct Validity Study of Three Instruments

    PubMed Central

    Moye, Jennifer; Azar, Annin R.; Karel, Michele J.; Gurrera, Ronald J.

    2016-01-01

    Does instrument based evaluation of consent capacity increase the precision and validity of competency assessment or does ostensible precision provide a false sense of confidence without in fact improving validity? In this paper we critically examine the evidence for construct validity of three instruments for measuring four functional abilities important in consent capacity: understanding, appreciation, reasoning, and expressing a choice. Instrument based assessment of these abilities is compared through investigation of a multi-trait multi-method matrix in 88 older adults with mild to moderate dementia. Results find variable support for validity. There appears to be strong evidence for good hetero-method validity for the measurement of understanding, mixed evidence for validity in the measurement of reasoning, and strong evidence for poor hetero-method validity for the concepts of appreciation and expressing a choice, although the latter is likely due to extreme range restrictions. The development of empirically based tools for use in capacity evaluation should ultimately enhance the reliability and validity of assessment, yet clearly more research is needed to define and measure the constructs of decisional capacity. We would also emphasize that instrument based assessment of capacity is only one part of a comprehensive evaluation of competency which includes consideration of diagnosis, psychiatric and/or cognitive symptomatology, risk involved in the situation, and individual and cultural differences. PMID:27330455

  3. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  4. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face these predictions could help investigations out of an impasse. Although, there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width or alternatively using dimensionality reduction techniques such as principal component analysis where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex and ancestry matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions either locally in particular parts of the face or in terms of overall similarity is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge this is the first effort at generating facial composites from DNA and the results are preliminary

  5. Results from SMAP Validation Experiments 2015 and 2016

    NASA Astrophysics Data System (ADS)

    Colliander, A.; Jackson, T. J.; Cosh, M. H.; Misra, S.; Crow, W.; Powers, J.; Wood, E. F.; Mohanty, B.; Judge, J.; Drewry, D.; McNairn, H.; Bullock, P.; Berg, A. A.; Magagi, R.; O'Neill, P. E.; Yueh, S. H.

    2017-12-01

    NASA's Soil Moisture Active Passive (SMAP) mission was launched in January 2015. The objective of the mission is global mapping of soil moisture and freeze/thaw state. Well-characterized sites with calibrated in situ soil moisture measurements are used to determine the quality of the soil moisture data products; these sites are designated as core validation sites (CVS). To support the CVS-based validation, airborne field experiments are used to provide high-fidelity validation data and to improve the SMAP retrieval algorithms. The SMAP project and NASA coordinated airborne field experiments at three CVS locations in 2015 and 2016. SMAP Validation Experiment 2015 (SMAPVEX15) was conducted around the Walnut Gulch CVS in Arizona in August, 2015. SMAPVEX16 was conducted at the South Fork CVS in Iowa and Carman CVS in Manitoba, Canada from May to August 2016. The airborne PALS (Passive Active L-band Sensor) instrument mapped all experiment areas several times resulting in 30 coincidental measurements with SMAP. The experiments included intensive ground sampling regime consisting of manual sampling and augmentation of the CVS soil moisture measurements with temporary networks of soil moisture sensors. Analyses using the data from these experiments have produced various results regarding the SMAP validation and related science questions. The SMAPVEX15 data set has been used for calibration of a hyper-resolution model for soil moisture product validation; development of a multi-scale parameterization approach for surface roughness, and validation of disaggregation of SMAP soil moisture with optical thermal signal. The SMAPVEX16 data set has been already used for studying the spatial upscaling within a pixel with highly heterogeneous soil texture distribution; for understanding the process of radiative transfer at plot scale in relation to field scale and SMAP footprint scale over highly heterogeneous vegetation distribution; for testing a data fusion based soil moisture

  6. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    PubMed

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.

  7. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  8. Results and Validation of MODIS Aerosol Retrievals Over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, Lorraine; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.

  9. Results and Validation of MODIS Aerosol Retrievals over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.

  10. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such asmore » MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation – making full use of today’s computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.« less

  11. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits to combine the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For such purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results

    NASA Astrophysics Data System (ADS)

    Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2015-03-01

    Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface based metrics and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.

  13. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between preoperative image and intraoperative anatomy. However, brain shift compromises the aforementioned alignment. Currently, the clinical standard to monitor brain shift is intraoperative magnetic resonance (iMR). While iMR provides better understanding of brain shift, its cost and encumbrance is a consideration for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation `atlas' containing potential deformation solutions derived from a biomechanical model that account for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory `atlas' solution fit to best match measured surface deformation. Accordingly, preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurement of surface deformation and used to drive the inverse problem. To assess the model accuracy, subsurface shift of targets between preoperative and intraoperative states was measured and compared to model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across 2 cases. While further improvements in both the model and ability to validate with iMR are desired, the results reported are encouraging.

  14. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Validity evidence based on test content.

    PubMed

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  16. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods has been developed. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-interim reanalysis for the period 1997 - 2008. This allows to investigate the isolated skill of downscaling methods without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two test validation datasets, one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i.) correlation matrices, ii.) pairwise joint threshold exceedances, and iii.) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related for instance to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.

  17. Temporal validation for landsat-based volume estimation model

    Treesearch

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  18. JPL/USC GAIM: Validating COSMIC and Ground-Based GPS Assimilation Results to Estimate Ionospheric Electron Densities

    NASA Astrophysics Data System (ADS)

    Komjathy, A.; Wilson, B.; Akopian, V.; Pi, X.; Mannucci, A.; Wang, C.

    2008-12-01

    tracing applications and trans-ionospheric path delay calibration. In the presentation, we will discuss the expected impact of NRT COSMIC occultation and NRT ground-based measurements and present validation results for ingest of COSMIC data into GAIM using measurements from World Days. We will quality check our COSMIC-derived products by comparing Abel profiles and JPL- processed results. Furthermore, we will validate GAIM assimilation results using Incoherent Scatter Radar measurements from Arecibo, Jicamarca and Millstone Hill datasets. We will conclude by characterizing the improved electron density states using dual-frequency altimeter-derived Jason vertical TEC measurements.

  19. Using Ground-Based Measurements and Retrievals to Validate Satellite Data

    NASA Technical Reports Server (NTRS)

    Dong, Xiquan

    2002-01-01

    The proposed research is to use the DOE ARM ground-based measurements and retrievals as the ground-truth references for validating satellite cloud results and retrieving algorithms. This validation effort includes four different ways: (1) cloud properties on different satellites, therefore different sensors, TRMM VIRS and TERRA MODIS; (2) cloud properties at different climatic regions, such as DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, low and high level cloud properties; and (4) day and night retrieving algorithms. Validation of satellite-retrieved cloud properties is very difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, only carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Even though the validation effort is so difficult, a significant progress has been made during the proposed study period, and the major accomplishments are summarized in the follow.

  20. Validation environment for AIPS/ALS: Implementation and results

    NASA Technical Reports Server (NTRS)

    Segall, Zary; Siewiorek, Daniel; Caplan, Eddie; Chung, Alan; Czeck, Edward; Vrsalovic, Dalibor

    1990-01-01

    The work is presented which was performed in porting the Fault Injection-based Automated Testing (FIAT) and Programming and Instrumentation Environments (PIE) validation tools, to the Advanced Information Processing System (AIPS) in the context of the Ada Language System (ALS) application, as well as an initial fault free validation of the available AIPS system. The PIE components implemented on AIPS provide the monitoring mechanisms required for validation. These mechanisms represent a substantial portion of the FIAT system. Moreover, these are required for the implementation of the FIAT environment on AIPS. Using these components, an initial fault free validation of the AIPS system was performed. The implementation is described of the FIAT/PIE system, configured for fault free validation of the AIPS fault tolerant computer system. The PIE components were modified to support the Ada language. A special purpose AIPS/Ada runtime monitoring and data collection was implemented. A number of initial Ada programs running on the PIE/AIPS system were implemented. The instrumentation of the Ada programs was accomplished automatically inside the PIE programming environment. PIE's on-line graphical views show vividly and accurately the performance characteristics of Ada programs, AIPS kernel and the application's interaction with the AIPS kernel. The data collection mechanisms were written in a high level language, Ada, and provide a high degree of flexibility for implementation under various system conditions.

  1. Situating Standard Setting within Argument-Based Validity

    ERIC Educational Resources Information Center

    Papageorgiou, Spiros; Tannenbaum, Richard J.

    2016-01-01

    Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this lack in the literature, with a specific focus on topics related to argument-based…

  2. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Changes as a result of DRG validation. 476.84... DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter... result of QIO validation activities. ...

  3. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Programs Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end crossvalidation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required the interpretation of a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  4. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method could bring into the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, bio-chemical, pharmaceutical and genetic methods have been developed and ad hoc validation statistics are available and routinely used, for in-house and inter-laboratory testing, and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper, and applied to validation data for different genetically modified elements. Remarks were drawn on the analytical results. The fuzzy-logic based rules were shown to be applicable to improve interpretation of results and facilitate overall evaluation of the multiplex method.

  5. Validation of POLDER/ADEOS data using a ground-based lidar network: Preliminary results for semi-transparent and cirrus clouds

    NASA Technical Reports Server (NTRS)

    Chepfer, H.; Sauvage, L.; Flamant, P. H.; Pelon, J.; Goloub, P.; Brogniez, G.; spinhirne, J.; Lavorato, M.; Sugimoto, N.

    1998-01-01

    At mid and tropical latitudes, cirrus clouds are present more than 50% of the time in satellites observations. Due to their large spatial and temporal coverage, and associated low temperatures, cirrus clouds have a major influence on the Earth-Ocean-Atmosphere energy balance through their effects on the incoming solar radiation and outgoing infrared radiation. At present the impact of cirrus clouds on climate is well recognized but remains to be asserted more precisely, for their optical and radiative properties are not very well known. In order to understand the effects of cirrus clouds on climate, their optical and radiative characteristics of these clouds need to be determined accurately at different scales in different locations i.e. latitude. Lidars are well suited to observe cirrus clouds, they can detect very thin and semi-transparent layers, and retrieve the clouds geometrical properties i.e. altitude and multilayers, as well as radiative properties i.e. optical depth, backscattering phase functions of ice crystals. Moreover the linear depolarization ratio can give information on the ice crystal shape. In addition, the data collected with an airborne version of POLDER (POLarization and Directionality of Earth Reflectances) instrument have shown that bidirectional polarized measurements can provide information on cirrus cloud microphysical properties (crystal shapes, preferred orientation in space). The spaceborne version of POLDER-1 has been flown on ADEOS-1 platform during 8 months (October 96 - June 97), and the next POLDER-2 instrument will be launched in 2000 on ADEOS-2. The POLDER-1 cloud inversion algorithms are currently under validation. For cirrus clouds, a validation based on comparisons between cloud properties retrieved from POLDER-1 data and cloud properties inferred from a ground-based lidar network is currently under consideration. We present the first results of the validation.

  6. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effectiveness software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worchester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the user of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS or the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBS's cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBS's as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBS's will be developed, and modification of the SSF to support these tools and methods will be addressed.

  7. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties coming from model mechanism, model inputs, parameterization schemes, and scaling issue in the regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with confident certainty is required but difficult. As a result, it is indispensable to develop the validation methods to quantitatively assess the accuracy and error sources of the regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate to validate RS_ET in diverse resolutions at different time-scales. An independent RS_ET validation using this method was presented over the Hai River Basin, China in 2002-2009 as a case study. Validation at the basin scale showed good agreements between the 1 km annual RS_ET and the validation data such as the water balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and landuse types. Validation at the local scale also had good results for monthly, daily RS_ET at 30 m and 1 km resolutions, comparing to the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes. Although some

  8. Validity of Students Worksheet Based Problem-Based Learning for 9th Grade Junior High School in living organism Inheritance and Food Biotechnology.

    NASA Astrophysics Data System (ADS)

    Jefriadi, J.; Ahda, Y.; Sumarmin, R.

    2018-04-01

    Based on preliminary research of students worksheet used by teachers has several disadvantages such as students worksheet arranged directly drove learners conduct an investigation without preceded by directing learners to a problem or provide stimulation, student's worksheet not provide a concrete imageand presentation activities on the students worksheet not refer to any one learning models curicullum recommended. To address problems Reviews these students then developed a worksheet based on problem-based learning. This is a research development that using Ploom models. The phases are preliminary research, development and assessment. The instruments used in data collection that includes pieces of observation/interviews, instrument self-evaluation, instruments validity. The results of the validation expert on student worksheets get a valid result the average value 80,1%. Validity of students worksheet based problem-based learning for 9th grade junior high school in living organism inheritance and food biotechnology get valid category.

  9. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, was aiming to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. Flexible meshed fabric structure (the net) ejected from a container and propelled by corner masses (the bullets) arranged around its circumference have been launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras were recording the experiment allowing 3D reconstruction of the net motion. The net knots have been coloured to allow the images post-process using colour segmentation, stereo matching and iterative closest point (ICP) for knots tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine accurately the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly

  10. Development and validation of the simulation-based learning evaluation scale.

    PubMed

    Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O

    2016-05-01

    The instruments that evaluate a student's perception of receiving simulated training are English versions and have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, the initial item pool was comprised of 50 items related to simulation that were drawn from the literature of core competencies. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study. Two hundred and twenty-five students completed and returned questionnaires (response rate=90%). Six items were deleted from the initial item pool and one was added after an expert panel review. Exploratory factor analysis with varimax rotation revealed 37 items remaining in five factors which accounted for 67% of the variance. The construct validity of SBLES was substantiated in a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure. The findings tally with the criterion of convergent and discriminant validity. The range of internal consistency for five subscales was .90 to .93. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale could be applied in the nursing school to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Challenges in validating model results for first year ice

    NASA Astrophysics Data System (ADS)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates as it is advected inside the model domain. Ice that is younger than 365 days is classified as first year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which include "ice type", a representation of the separation of regions between those infested by first year ice, and those infested by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and from the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data, and the product's confidence level, which have a strong seasonal dependency.

  12. Competency-Based Training and Simulation: Making a "Valid" Argument.

    PubMed

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  13. A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation

    NASA Astrophysics Data System (ADS)

    Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.

    2008-08-01

    This work presents preliminary results of a new type of possible application in the INAA experiments of elemental analysis, useful to check errors occurred during investigation of unknown samples; it relies on the INAA method validation experiments and accuracy of the neutron data from the literature. The paper comprises 2 sections, the first one presents—in short—the steps of the experimental tests carried out for INAA method validation and for establishing the `ACTIVA-N' laboratory performance, which is-at the same time-an illustration of the laboratory evolution on the way to get performance. Section 2 presents our recent INAA results on CRMs, of which interpretation opens discussions about the usefulness of using a tool for checking possible errors, different from the usual statistical procedures. The questionable aspects and the requirements to develop a practical checking tool are discussed.

  14. Feasibility and relative validity of a digital photo-based dietary assessment: results from the Nutris-Phone study.

    PubMed

    Prinz, Nicole; Bohn, Barbara; Kern, Annamarie; Püngel, Deborah; Pollatos, Olga; Holl, Reinhard W

    2018-03-06

    For dietary assessment, mobile devices with a camera can be used as an alternative to hand-written paper records. The Nutritional Tracking Information Smartphone (Nutris-Phone) study aimed to examine relative validity and feasibility of a photo-based dietary record in everyday life. Parallel to the photo-based technique, a weighed record was performed. Participant satisfaction was assessed by questionnaire. A trained nutrition scientist evaluated portion sizes and nutrient content was calculated (DGExpert). Spearman correlation and Bland-Altman analyses were applied. Healthy, non-pregnant volunteers (≥18 years) without intent to lose weight recruited at Ulm University, Germany. Sixty-six participants (36 % males, median age 22·0 (interquartile range 20·0-25·0) years) took pictures of foods and beverages consumed with a commercially available mobile phone. Significant correlation between the photo-based and weighed record was observed: energy (r=0·991), carbohydrate (r=0·980), fat (r=0·972), protein (r=0·988), fibre (r=0·941). Bland-Altman analyses indicated comparable means and acceptable 95 % limits of agreement (energy: -345·2 to 302·9 kJ (-82·5 to 72·4 kcal); carbohydrate: -15·2 to 13·1 g; fat: -6·4 to 6·4 g; protein: -5·9 to 5·6 g; fibre: -2·7 to 2·5 g). However, with increasing intake level, underestimation by the digital method was present (except for fat, all P<0·01). Over 80 % of participants were satisfied with the photo-based record. In nearly 90 %, technical implementation was without major problems. Compared with a weighed record, the photo-based dietary record seems to be valid, feasible and user-friendly to estimate energy, macronutrient and fibre intakes, although a systematic bias with increasing levels of intake should be kept in mind.

  15. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Reporting initial validity and drug test results. 26.139... § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall... permitted under § 26.75(h), positive test results from initial drug tests at the licensee testing facility...

  16. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Reporting initial validity and drug test results. 26.139... § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall... permitted under § 26.75(h), positive test results from initial drug tests at the licensee testing facility...

  17. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment have been implemented on a SUN workstation.

  18. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  19. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  20. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  1. 42 CFR 476.84 - Changes as a result of DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO under part 473 of this chapter for changes in diagnostic and procedural coding that resulted in a change...

  2. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  3. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  4. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA with Pearson correlation coefficient established criterion validity. Consistent student performance using Kuder-Richardson coefficient (KR-20) since 2012 reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435

  5. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  6. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  7. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  8. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner... validation under section 1866(a)(1)(F) of the Act is entitled to a review of that change if— (i) The change...

  9. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  10. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  11. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  12. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  13. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  14. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... validation inspection results. 493.571 Section 493.571 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a... licensure program, in accordance with State law. (c) CMS validation inspection results. CMS may disclose the...

  15. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Reporting initial validity and drug test results. 26.139 Section 26.139 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall...

  16. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Reporting initial validity and drug test results. 26.139 Section 26.139 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall...

  17. 10 CFR 26.139 - Reporting initial validity and drug test results.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Reporting initial validity and drug test results. 26.139 Section 26.139 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.139 Reporting initial validity and drug test results. (a) The licensee testing facility shall...

  18. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

    Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2) which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation technique. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared differently while holdout validation revealed similar high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for

  19. Validation of virtual-reality-based simulations for endoscopic sinus surgery.

    PubMed

    Dharmawardana, N; Ruthenbeck, G; Woods, C; Elmiyeh, B; Diment, L; Ooi, E H; Reynolds, K; Carney, A S

    2015-12-01

    Virtual reality (VR) simulators provide an alternative to real patients for practicing surgical skills but require validation to ensure accuracy. Here, we validate the use of a virtual reality sinus surgery simulator with haptic feedback for training in Otorhinolaryngology - Head & Neck Surgery (OHNS). Participants were recruited from final-year medical students, interns, resident medical officers (RMOs), OHNS registrars and consultants. All participants completed an online questionnaire after performing four separate simulation tasks. These were then used to assess face, content and construct validity. anova with post hoc correlation was used for statistical analysis. The following groups were compared: (i) medical students/interns, (ii) RMOs, (iii) registrars and (iv) consultants. Face validity results had a statistically significant (P < 0.05) difference between the consultant group and others, while there was no significant difference between medical student/intern and RMOs. Variability within groups was not significant. Content validity results based on consultant scoring and comments indicated that the simulations need further development in several areas to be effective for registrar-level teaching. However, students, interns and RMOs indicated that the simulations provide a useful tool for learning OHNS-related anatomy and as an introduction to ENT-specific procedures. The VR simulations have been validated for teaching sinus anatomy and nasendoscopy to medical students, interns and RMOs. However, they require further development before they can be regarded as a valid tool for more advanced surgical training. © 2015 John Wiley & Sons Ltd.

  20. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  1. Results and current status of the NPARC alliance validation effort

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Jones, Ralph R.

    1996-01-01

    The NPARC Alliance is a partnership between the NASA Lewis Research Center (LeRC) and the USAF Arnold Engineering Development Center (AEDC) dedicated to the establishment of a national CFD capability, centered on the NPARC Navier-Stokes computer program. The three main tasks of the Alliance are user support, code development, and validation. The present paper is a status report on the validation effort. It describes the validation approach being taken by the Alliance. Representative results are presented for laminar and turbulent flat plate boundary layers, a supersonic axisymmetric jet, and a glancing shock/turbulent boundary layer interaction. Cases scheduled to be run in the future are also listed. The archive of validation cases is described, including information on how to access it via the Internet.

  2. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

    Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, published between January 2000 and May 2015. The studies examined the validity, inter- and intra-rater reliabilities of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of reliability and validity studies respectively. Results A total of 898 hits were achieved, of which 11 articles based on inclusion criteria were reviewed. Nine studies explored the concurrent validity, inter- and intra-rater reliabilities, while two studies examined only the concurrent validity. Reviewed studies were moderate to good in methodological quality. The physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic testa and scar assessment.

  3. Smartphone based automatic organ validation in ultrasound video.

    PubMed

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to the doctors for getting diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by these untrained persons do not contain the proper information that is required by a physician. As compared to standard methods for video transmission, mHealth driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we are proposing an organ validation algorithm to evaluate the ultrasound video based on the content present. This will guide the semi skilled person to acquire the representative data from patient. Advancement in smartphone technology allows us to perform high medical image processing on smartphone. In this paper we have developed an Application (APP) for a smartphone which can automatically detect the valid frames (which consist of clear organ visibility) in an ultrasound video and ignores the invalid frames (which consist of no-organ visibility), and produces a compressed sized video. This is done by extracting the GIST features from the Region of Interest (ROI) of the frame and then classifying the frame using SVM classifier with quadratic kernel. The developed application resulted with the accuracy of 94.93% in classifying valid and invalid images.

  4. Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.

    PubMed

    Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J

    2017-10-01

    Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a postmodule survey. This was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores, generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons

  5. Validating crash locations for quantitative spatial analysis: a GIS-based approach.

    PubMed

    Loo, Becky P Y

    2006-09-01

    In this paper, the spatial variables of the crash database in Hong Kong from 1993 to 2004 are validated. The proposed spatial data validation system makes use of three databases (the crash, road network and district board databases) and relies on GIS to carry out most of the validation steps so that the human resource required for manually checking the accuracy of the spatial data can be enormously reduced. With the GIS-based spatial data validation system, it was found that about 65-80% of the police crash records from 1993 to 2004 had correct road names and district board information. In 2004, the police crash database contained about 12.7% mistakes for road names and 9.7% mistakes for district boards. The situation was broadly comparable to the United Kingdom. However, the results also suggest that safety researchers should carefully validate spatial data in the crash database before scientific analysis.

  6. Remote sensing albedo product validation over heterogenicity surface based on WSN: preliminary results and its uncertainty

    NASA Astrophysics Data System (ADS)

    Wu, Xiaodan; Wen, Jianguang; Xiao, Qing; Peng, Jingjing; Liu, Qiang; Dou, Baocheng; Tang, Yong; Li, Xiuhong

    2014-11-01

    The evaluation of uncertainty in satellite-derived albedo products is critical to ensure their accuracy, stability and consistency for studying climate change. In this study, we assess the Moderate-resolution Imaging Spectroradiometer(MODIS) albedo 8 day standard product MOD43B3 using the ground-based albedometer measurement based on the wireless sensor network (WSN) technology. The experiment have been performed in Huailai, Hubei province. A 1.5 km*2 km area are selected as study region, which locates between 115.78° E-115.80° E and 40.35° N-40.37° N. This area is characterized by its distinct landscapes: bare ground between January and April, corn from May to Octorber. That is, this area is relatively homegeneous from January to Octorber, but in Novermber and December, the surface is very heterogeneous because of straw burning, as well as snow fall and snow melting. It is a big challenge to validate the MODIS albedo products because of the vast difference in spatial resolution between ground measurement and satellite measurement. Here, we use the HJ albedo products as the bridge that link the ground measurement with satellite data. Firstly, we analyses the spatial representativeness of the WSN site under green-up, dormant and snow covered situations to decide whether direct comparison between ground-based measurement and MODIS albedo can be made. The semivariogram is used here to describe the ground hetergeneity around the WSN site. In addition, the bias between the average albedo of the certain neighborhood centered at the WSN site and the center pixel albedo is also calculated.Then we compare the MOD43B3 value with the ground-based value. Result shows that MOD43B3 agree with in situ well during the growing season, however, there are relatively large difference between ground albedos and MCD43B3 albedos during dormant and snow-coverd periods.

  7. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner dissatisfied with a change to the diagnostic or procedural coding information made by a QIO as a result of DRG...

  8. In Defense of an Instrument-Based Approach to Validity

    ERIC Educational Resources Information Center

    Hood, S. Brian

    2012-01-01

    Paul E. Newton argues in favor of a conception of validity, viz, "the consensus definition of validity," according to which the extension of the predicate "is valid" is a subset of "assessment-based decision-making procedure[s], which [are] underwritten by an argument that the assessment procedure can be used to measure the attribute entailed by…

  9. Development and validation of an instrument for evaluating inquiry-based tasks in science textbooks

    NASA Astrophysics Data System (ADS)

    Yang, Wenyuan; Liu, Enshan

    2016-12-01

    This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about scientific concepts, (2) providing students opportunities to use inquiry process skills, (3) being conducive to establishing understandings about scientific inquiry, and (4) giving students opportunities to develop higher order thinking skills. An instrument - the Inquiry-Based Tasks Analysis Inventory (ITAI) - was developed to judge whether inquiry-based tasks perform these functions well. To test the reliability and validity of the ITAI, 4 faculty members were invited to use the ITAI to collect data from 53 inquiry-based tasks in the 3 most widely adopted senior secondary biology textbooks in Mainland China. The results indicate that (1) the inter-rater reliability reached 87.7%, (2) the grading criteria have high discriminant validity, (3) the items possess high convergent validity, and (4) the Cronbach's alpha reliability coefficient reached 0.792. The study concludes that the ITAI is valid and reliable. Because of its solid foundations in theoretical and empirical argumentation, the ITAI is trustworthy.

  10. Validation of microbiological testing in cardiovascular tissue banks: results of a quality round trial.

    PubMed

    de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter

    2017-11-01

    Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of processing methods in 20 tissue banks and 1 reference laboratory. We did this to validate the results for accepting or rejecting tissue. We included the decontamination methods used and the influence of antibiotic cocktails and residues with results and controls. The minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized. Twenty cardiovascular tissue banks participated in this quality round. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples were heart valves to be released for implantation, 3 of the 20 participants would have decided to accept their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved, will surgeons be able to fully rely on the methods

  11. The Development and Validation of the Mood-based Indoor Tanning Scale.

    PubMed

    Carcioppolo, Nick; Chen, Yixin; John, Kevin K; Gonzalez, Andrea Martinez; King, Andy J; Morgan, Susan E; Hu, Shasa

    2017-01-01

    Research indicates that mood-based motivations may be an important predictor of indoor tanning bed use and may be related to indoor tanning dependence. Problematically, little research has been conducted to develop a psychometric measure of mood-based tanning motivations. The current study seeks to develop and validate the moodbased indoor tanning scale (MITS). Two studies were conducted to identify and verify the MITS factor structure as well as assess construct validity. Study 1 was conducted at 5 geographically diverse universities in the United States. Study 2 was conducted by using a national online sample in the United States. Results from study 1 specified the factor structure of the MITS. Results from study 2 suggest that a one-point increase in the MITS measure corresponds with using indoor tanning beds 11 more times in the past year. These findings demonstrate that moodbased tanning motivations are a strong predictor of indoor tanning intentions and behavior. Further, they suggest that health behavior researchers and healthcare practitioners can use the MITS to assess the extent to which mood-based motivations impact indoor tanning bed use.

  12. Generation of Human Induced Pluripotent Stem Cells Using RNA-Based Sendai Virus System and Pluripotency Validation of the Resulting Cell Population.

    PubMed

    Chichagova, Valeria; Sanchez-Vera, Irene; Armstrong, Lyle; Steel, David; Lako, Majlinda

    2016-01-01

    Human induced pluripotent stem cells (hiPSCs) provide a platform for studying human disease in vitro, increase our understanding of human embryonic development, and provide clinically relevant cell types for transplantation, drug testing, and toxicology studies. Since their discovery, numerous advances have been made in order to eliminate issues such as vector integration into the host genome, low reprogramming efficiency, incomplete reprogramming and acquisition of genomic instabilities. One of the ways to achieve integration-free reprogramming is by using RNA-based Sendai virus. Here we describe a method to generate hiPSCs with Sendai virus in both feeder-free and feeder-dependent culture systems. Additionally, we illustrate methods by which to validate pluripotency of the resulting stem cell population.

  13. Simulated Driving Assessment (SDA) for Teen Drivers: Results from a Validation Study

    PubMed Central

    McDonald, Catherine C.; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S.; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K.

    2015-01-01

    Background Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardized assessments of teen driving skills exist. The purpose of this study was to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. Methods The SDA's 35-minute simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16–17 years, provisional license ≤90 days) and 17 experienced adults (age 25–50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor reviewed videos of SDA performance (DEI Score). Results The SDA demonstrated construct validity: 1.) Teens had a higher Error Score than adults (30 vs. 13, p=0.02); 2.) For each additional error committed, the relative risk of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI: 1.05–1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=−0.66, p<0.001). Conclusions This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. PMID:25740939

  14. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    USGS Publications Warehouse

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    Based on these findings, the GDEM validation team recommends the release of the GDEM2 to the public, acknowledging that, while vastly improved, some artifacts still exist which could affect its utility in certain applications.

  15. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V& V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V& V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned; which are intended to benefit future model-based and simulation-based development efforts.

  16. Policy and Validity Prospects for Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  17. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is key technology of simulation and modelling. For the traditional model validation methods, the completeness is weak; it is carried out in one scale; it depends on human experience. The SDG (Signed Directed Graph) and qualitative trend based multiple scale validation is proposed. First the SDG model is built and qualitative trends are added to the model. And then complete testing scenarios are produced by positive inference. The multiple scale validation is carried out by comparing the testing scenarios with outputs of simulation model in different scales. Finally, the effectiveness is proved by carrying out validation for a reactor model.

  18. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.

  19. Empirical validation of an agent-based model of wood markets in Switzerland

    PubMed Central

    Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver

    2018-01-01

    We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Own surveys closed gaps where data was not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300

  20. Validity of Scientific Based Chemistry Android Module to Empower Science Process Skills (SPS) in Solubility Equilibrium

    NASA Astrophysics Data System (ADS)

    Antrakusuma, B.; Masykuri, M.; Ulfa, M.

    2018-04-01

    Evolution of Android technology can be applied to chemistry learning, one of the complex chemistry concept was solubility equilibrium. this concept required the science process skills (SPS). This study aims to: 1) Characteristic scientific based chemistry Android module to empowering SPS, and 2) Validity of the module based on content validity and feasibility test. This research uses a Research and Development approach (RnD). Research subjects were 135 s1tudents and three teachers at three high schools in Boyolali, Central of Java. Content validity of the module was tested by seven experts using Aiken’s V technique, and the module feasibility was tested to students and teachers in each school. Characteristics of chemistry module can be accessed using the Android device. The result of validation of the module contents got V = 0.89 (Valid), and the results of the feasibility test Obtained 81.63% (by the student) and 73.98% (by the teacher) indicates this module got good criteria.

  1. Validating a Theory-Based Survey to Evaluate Teaching Effectiveness in Higher Education

    ERIC Educational Resources Information Center

    Amrein-Beardsley, A.; Haladyna, T.

    2012-01-01

    Surveys to evaluate instructor effectiveness are commonly used in higher education. Yet the survey items included are often drawn from other surveys without reference to a theory of adult learning. The authors present the results from a validation study of such a theory-based survey. They evidence that an evaluation survey based on a theory that…

  2. Validation of a Video-based Game-Understanding Test Procedure in Badminton.

    ERIC Educational Resources Information Center

    Blomqvist, Minna T.; Luhtanen, Pekka; Laakso, Lauri; Keskinen, Esko

    2000-01-01

    Reports the development and validation of video-based game-understanding tests in badminton for elementary and secondary students. The tests included different sequences that simulated actual game situations. Players had to solve tactical problems by selecting appropriate solutions and arguments for their decisions. Results suggest that the test…

  3. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for the systems when it is hard, or even impossible, to find the probability functions of the system. The method starts by a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  4. Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity.

    PubMed

    Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina

    2016-01-01

    A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.

  5. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.

    PubMed

    Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F

    2017-05-01

    To lay the basis of studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies were conducted in relation to comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants ( N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers on a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator highly depends on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point at a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.

  6. Validation of the Female Sexual Function Index (FSFI) for web-based administration.

    PubMed

    Crisp, Catrina C; Fellner, Angela N; Pauls, Rachel N

    2015-02-01

    Web-based questionnaires are becoming increasingly valuable for clinical research. The Female Sexual Function Index (FSFI) is the gold standard for evaluating female sexual function; yet, it has not been validated in this format. We sought to validate the Female Sexual Function Index (FSFI) for web-based administration. Subjects enrolled in a web-based research survey of sexual function from the general population were invited to participate in this validation study. The first 151 respondents were included. Validation participants completed the web-based version of the FSFI followed by a mailed paper-based version. Demographic data were collected for all subjects. Scores were compared using the paired t test and the intraclass correlation coefficient. One hundred fifty-one subjects completed both web- and paper-based versions of the FSFI. Those subjects participating in the validation study did not differ in demographics or FSFI scores from the remaining subjects in the general population study. Total web-based and paper-based FSFI scores were not significantly different (mean 20.31 and 20.29 respectively, p = 0.931). The six domains or subscales of the FSFI were similar when comparing web and paper scores. Finally, intraclass correlation analysis revealed a high degree of correlation between total and subscale scores, r = 0.848-0.943, p < 0.001. Web-based administration of the FSFI is a valid alternative to the paper-based version.

  7. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Validation of an Innovative Satellite-Based UV Dosimeter

    NASA Astrophysics Data System (ADS)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

    We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated by exploiting both ground-based measurements and an in-vivo assessment of the erythemal effects on some volunteers having a controlled exposure to solar radiation.Both validations showed that the satellite-based UV dosimeter has a good accuracy and reliability needed for health-related applications.The app with this satellite-based UV dosimeter also includes other related functionalities such as the provision of safe sun exposure time updated in real-time and end exposure visual/sound alert. This app will be launched on the global market by siHealth Ltd in May 2016 under the name of "HappySun" and available both for Android and for iOS devices (more info on http://www.happysun.co.uk).Extensive R&D activities are on-going for further improvement of the satellite-based UV dosimeter's accuracy.

  9. Two Validated Ways of Improving the Ability of Decision-Making in Emergencies; Results from a Literature Review

    PubMed Central

    Khorram-Manesh, Amir; Berlin, Johan; Carlström, Eric

    2016-01-01

    The aim of the current review wasto study the existing knowledge about decision-making and to identify and describe validated training tools.A comprehensive literature review was conducted by using the following keywords: decision-making, emergencies, disasters, crisis management, training, exercises, simulation, validated, real-time, command and control, communication, collaboration, and multi-disciplinary in combination or as an isolated word. Two validated training systems developed in Sweden, 3 level collaboration (3LC) and MacSim, were identified and studied in light of the literature review in order to identify how decision-making can be trained. The training models fulfilled six of the eight identified characteristics of training for decision-making.Based on the results, these training models contained methods suitable to train for decision-making. PMID:27878123

  10. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.

  11. Designing and validation of a yoga-based intervention for schizophrenia.

    PubMed

    Govindaraj, Ramajayam; Varambally, Shivarama; Sharma, Manjunath; Gangadhar, Bangalore Nanjundaiah

    2016-06-01

    Schizophrenia is a chronic mental illness which causes significant distress and dysfunction. Yoga has been found to be effective as an add-on therapy in schizophrenia. Modules of yoga used in previous studies were based on individual researcher's experience. This study aimed to develop and validate a specific generic yoga-based intervention module for patients with schizophrenia. The study was conducted at NIMHANS Integrated Centre for Yoga (NICY). A yoga module was designed based on traditional and contemporary yoga literature as well as published studies. The yoga module along with three case vignettes of adult patients with schizophrenia was sent to 10 yoga experts for their validation. Experts (n = 10) gave their opinion on the usefulness of a yoga module for patients with schizophrenia with some modifications. In total, 87% (13 of 15 items) of the items in the initial module were retained, with modification in the remainder as suggested by the experts. A specific yoga-based module for schizophrenia was designed and validated by experts. Further studies are needed to confirm efficacy and clinical utility of the module. Additional clinical validation is suggested.

  12. Validity and Practitality of Acid-Base Module Based on Guided Discovery Learning for Senior High School

    NASA Astrophysics Data System (ADS)

    Yerimadesi; Bayharti; Jannah, S. M.; Lufri; Festiyed; Kiram, Y.

    2018-04-01

    This Research and Development(R&D) aims to produce guided discovery learning based module on topic of acid-base and determine its validity and practicality in learning. Module development used Four D (4-D) model (define, design, develop and disseminate).This research was performed until development stage. Research’s instruments were validity and practicality questionnaires. Module was validated by five experts (three chemistry lecturers of Universitas Negeri Padang and two chemistry teachers of SMAN 9 Padang). Practicality test was done by two chemistry teachers and 30 students of SMAN 9 Padang. Kappa Cohen’s was used to analyze validity and practicality. The average moment kappa was 0.86 for validity and those for practicality were 0.85 by teachers and 0.76 by students revealing high category. It can be concluded that validity and practicality was proven for high school chemistry learning.

  13. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628

  14. Determining the Scoring Validity of a Co-Constructed CEFR-Based Rating Scale

    ERIC Educational Resources Information Center

    Deygers, Bart; Van Gorp, Koen

    2015-01-01

    Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper wishes to answer are (a) whether it is possible to construct a CEFR-based rating scale with…

  15. Latency-Based and Psychophysiological Measures of Sexual Interest Show Convergent and Concurrent Validity.

    PubMed

    Ó Ciardha, Caoilte; Attard-Johnson, Janice; Bindemann, Markus

    2018-04-01

    Latency-based measures of sexual interest require additional evidence of validity, as do newer pupil dilation approaches. A total of 102 community men completed six latency-based measures of sexual interest. Pupillary responses were recorded during three of these tasks and in an additional task where no participant response was required. For adult stimuli, there was a high degree of intercorrelation between measures, suggesting that tasks may be measuring the same underlying construct (convergent validity). In addition to being correlated with one another, measures also predicted participants' self-reported sexual interest, demonstrating concurrent validity (i.e., the ability of a task to predict a more validated, simultaneously recorded, measure). Latency-based and pupillometric approaches also showed preliminary evidence of concurrent validity in predicting both self-reported interest in child molestation and viewing pornographic material containing children. Taken together, the study findings build on the evidence base for the validity of latency-based and pupillometric measures of sexual interest.

  16. Examinee Noneffort and the Validity of Program Assessment Results

    ERIC Educational Resources Information Center

    Wise, Steven L.; DeMars, Christine E.

    2010-01-01

    Educational program assessment studies often use data from low-stakes tests to provide evidence of program quality. The validity of scores from such tests, however, is potentially threatened by examinee noneffort. This study investigated the extent to which one type of noneffort--rapid-guessing behavior--distorted the results from three types of…

  17. Validation Test Results for Orthogonal Probe Eddy Current Thruster Inspection System

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A.

    2007-01-01

    Recent nondestructive evaluation efforts within NASA have focused on an inspection system for the detection of intergranular cracking originating in the relief radius of Primary Reaction Control System (PCRS) Thrusters. Of particular concern is deep cracking in this area which could lead to combustion leakage in the event of through wall cracking from the relief radius into an acoustic cavity of the combustion chamber. In order to reliably detect such defects while ensuring minimal false positives during inspection, the Orthogonal Probe Eddy Current (OPEC) system has been developed and an extensive validation study performed. This report describes the validation procedure, sample set, and inspection results as well as comparing validation flaws with the response from naturally occuring damage.

  18. Ride qualities criteria validation/pilot performance study: Flight test results

    NASA Technical Reports Server (NTRS)

    Nardi, L. U.; Kawana, H. Y.; Greek, D. C.

    1979-01-01

    Pilot performance during a terrain following flight was studied for ride quality criteria validation. Data from manual and automatic terrain following operations conducted during low level penetrations were analyzed to determine the effect of ride qualities on crew performance. The conditions analyzed included varying levels of turbulence, terrain roughness, and mission duration with a ride smoothing system on and off. Limited validation of the B-1 ride quality criteria and some of the first order interactions between ride qualities and pilot/vehicle performance are highlighted. An earlier B-1 flight simulation program correlated well with the flight test results.

  19. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision based obstacle detection and range estimation algorithms suitable for low level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequencies, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  20. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area, however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate MI-based Xie and Beni index for fuzzy-clustering are more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.

  1. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area, however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate MI-based Xie and Beni index for fuzzy-clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  2. V-SUIT Model Validation Using PLSS 1.0 Test Results

    NASA Technical Reports Server (NTRS)

    Olthoff, Claas

    2015-01-01

    The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universitat Munchen since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB(trademark)-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using same test points also used during PLSS 1.0 testing. The results of the simulation were then compared to the test data with special focus on absolute values during the steady state phases and dynamic behavior during the transition between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and those that are sufficiently close to the test results. Finally, lessons learned from the modelling and validation process are given in combination

  3. Validation of voxel-based morphometry (VBM) based on MRI

    NASA Astrophysics Data System (ADS)

    Yang, Xueyu; Chen, Kewei; Guo, Xiaojuan; Yao, Li

    2007-03-01

    Voxel-based morphometry (VBM) is an automated and objective image analysis technique for detecting differences in regional concentration or volume of brain tissue composition based on structural magnetic resonance (MR) images. VBM has been used widely to evaluate brain morphometric differences between different populations, but there isn't an evaluation system for its validation until now. In this study, a quantitative and objective evaluation system was established in order to assess VBM performance. We recruited twenty normal volunteers (10 males and 10 females, age range 20-26 years, mean age 22.6 years). Firstly, several focal lesions (hippocampus, frontal lobe, anterior cingulate, back of hippocampus, back of anterior cingulate) were simulated in selected brain regions using real MRI data. Secondly, optimized VBM was performed to detect structural differences between groups. Thirdly, one-way ANOVA and post-hoc test were used to assess the accuracy and sensitivity of VBM analysis. The results revealed that VBM was a good detective tool in majority of brain regions, even in controversial brain region such as hippocampus in VBM study. Generally speaking, much more severity of focal lesion was, better VBM performance was. However size of focal lesion had little effects on VBM analysis.

  4. Web Based Semi-automatic Scientific Validation of Models of the Corona and Inner Heliosphere

    NASA Astrophysics Data System (ADS)

    MacNeice, P. J.; Chulaki, A.; Taktakishvili, A.; Kuznetsova, M. M.

    2013-12-01

    Validation is a critical step in preparing models of the corona and inner heliosphere for future roles supporting either or both the scientific research community and the operational space weather forecasting community. Validation of forecasting quality tends to focus on a short list of key features in the model solutions, with an unchanging order of priority. Scientific validation exposes a much larger range of physical processes and features, and as the models evolve to better represent features of interest, the research community tends to shift its focus to other areas which are less well understood and modeled. Given the more comprehensive and dynamic nature of scientific validation, and the limited resources available to the community to pursue this, it is imperative that the community establish a semi-automated process which engages the model developers directly into an ongoing and evolving validation process. In this presentation we describe the ongoing design and develpment of a web based facility to enable this type of validation of models of the corona and inner heliosphere, on the growing list of model results being generated, and on strategies we have been developing to account for model results that incorporate adaptively refined numerical grids.

  5. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419

  7. Validity of "Hi_Science" as instructional media based-android refer to experiential learning model

    NASA Astrophysics Data System (ADS)

    Qamariah, Jumadi, Senam, Wilujeng, Insih

    2017-08-01

    Hi_Science is instructional media based-android in learning science on material environmental pollution and global warming. This study is aimed: (a) to show the display of Hi_Science that will be applied in Junior High School, and (b) to describe the validity of Hi_Science. Hi_Science as instructional media created with colaboration of innovative learning model and development of technology at the current time. Learning media selected is based-android and collaborated with experiential learning model as an innovative learning model. Hi_Science had adapted student worksheet by Taufiq (2015). Student worksheet had very good category by two expert lecturers and two science teachers (Taufik, 2015). This student worksheet is refined and redeveloped in android as an instructional media which can be used by students for learning science not only in the classroom, but also at home. Therefore, student worksheet which has become instructional media based-android must be validated again. Hi_Science has been validated by two experts. The validation is based on assessment of meterials aspects and media aspects. The data collection was done by media assessment instrument. The result showed the assessment of material aspects has obtained the average value 4,72 with percentage of agreement 96,47%, that means Hi_Science on the material aspects is in excellent category or very valid category. The assessment of media aspects has obtained the average value 4,53 with percentage of agreement 98,70%, that means Hi_Science on the media aspects is in excellent category or very valid category. It was concluded that Hi_Science as instructional media can be applied in the junior high school.

  8. Development and Validation of a Multimedia-based Assessment of Scientific Inquiry Abilities

    NASA Astrophysics Data System (ADS)

    Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao

    2015-09-01

    The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage such potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) to cover a more comprehensive construct of inquiry abilities and target secondary school students in different grades while this potential is leveraged. We implemented five steps derived from the construct modeling approach to design MASIA. During the implementation, multiple sources of evidence were collected in the steps of pilot testing and Rasch modeling to support the validity of MASIA. Particularly, through the participation of 1,066 8th and 11th graders, MASIA showed satisfactory psychometric properties to discriminate students with different levels of inquiry abilities in 101 items in 29 tasks when Rasch models were applied. Additionally, the Wright map indicated that MASIA offered accurate information about students' inquiry abilities because of the comparability of the distributions of student abilities and item difficulties. The analysis results also suggested that MASIA offered precise measures of inquiry abilities when the components (questioning, experimenting, analyzing, and explaining) were regarded as a coherent construct. Finally, the increased mean difficulty thresholds of item responses along with three performance levels across all sub-abilities supported the alignment between our scoring rubrics and our inquiry framework. Together with other sources of validity in the pilot testing, the results offered evidence to support the validity of MASIA.

  9. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results

    PubMed Central

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-01-01

    Background In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. Objective In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. Methods The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users’ perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). Results The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in ‘Quality of Work Life’, ‘Perceived Usefulness’, ‘Perceived Ease of Use’, and ‘User Control’, respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. Conclusions The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. PMID:24567081

  10. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology—either quantitative or qualitative—on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498

  11. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions

    PubMed Central

    Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-01-01

    Background The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users’ satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. Objective The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). Methods The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). At first, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Results Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=−.35, P<.001) and perceived stress (r=−.48, P<.001) demonstrated the construct validity of the scale. Conclusions The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user’s general

  12. Meta-Analysis of Criterion Validity for Curriculum-Based Measurement in Written Language

    ERIC Educational Resources Information Center

    Romig, John Elwood; Therrien, William J.; Lloyd, John W.

    2017-01-01

    We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…

  13. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  14. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  15. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  16. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  17. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  18. Inhibitor-based validation of a homology model of the active-site of tripeptidyl peptidase II.

    PubMed

    De Winter, Hans; Breslin, Henry; Miskowski, Tamara; Kavash, Robert; Somers, Marijke

    2005-04-01

    A homology model of the active site region of tripeptidyl peptidase II (TPP II) was constructed based on the crystal structures of four subtilisin-like templates. The resulting model was subsequently validated by judging expectations of the model versus observed activities for a broad set of prepared TPP II inhibitors. The structure-activity relationships observed for the prepared TPP II inhibitors correlated nicely with the structural details of the TPP II active site model, supporting the validity of this model and its usefulness for structure-based drug design and pharmacophore searching experiments.

  19. Non-Nuclear Validation Test Results of a Closed Brayton Cycle Test-Loop

    NASA Astrophysics Data System (ADS)

    Wright, Steven A.

    2007-01-01

    Both NASA and DOE have programs that are investigating advanced power conversion cycles for planetary surface power on the moon or Mars, or for next generation nuclear power plants on earth. Although open Brayton cycles are in use for many applications (combined cycle power plants, aircraft engines), only a few closed Brayton cycles have been tested. Experience with closed Brayton cycles coupled to nuclear reactors is even more limited and current projections of Brayton cycle performance are based on analytic models. This report describes and compares experimental results with model predictions from a series of non-nuclear tests using a small scale closed loop Brayton cycle available at Sandia National Laboratories. A substantial amount of testing has been performed, and the information is being used to help validate models. In this report we summarize the results from three kinds of tests. These tests include: 1) test results that are useful for validating the characteristic flow curves of the turbomachinery for various gases ranging from ideal gases (Ar or Ar/He) to non-ideal gases such as CO2, 2) test results that represent shut down transients and decay heat removal capability of Brayton loops after reactor shut down, and 3) tests that map a range of operating power versus shaft speed curve and turbine inlet temperature that are useful for predicting stable operating conditions during both normal and off-normal operating behavior. These tests reveal significant interactions between the reactor and balance of plant. Specifically these results predict limited speed up behavior of the turbomachinery caused by loss of load, the conditions for stable operation, and for direct cooled reactors, the tests reveal that the coast down behavior during loss of power events can extend for hours provided the ultimate heat sink remains available.

  20. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates to better inform both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the recent two versions (Version 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in Midwest. The summer total bias is largely coming from the hit bias in Central US. Meanwhile, the new version (V7) tends to produce more rainfall in the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model

  1. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  2. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    PubMed Central

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

    AIM It is difficult for Optometrists and General Practitioners to know which patients are at risk. The East London glaucoma prediction score (ELGPS) is a web based risk calculator that has been developed to determine Glaucoma risk at the time of screening. Multiple risk factors that are available in a low tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web based service. Data capture is user specific. METHOD The scoring system is a web based questionnaire that captures and subsequently calculates the relative risk for the presence of Glaucoma at the time of screening. Three categories of patient are described: Unlikely to have Glaucoma; Glaucoma Suspect and Glaucoma. A case review methodology of patients with known diagnosis is employed to validate the calculator risk assessment. RESULTS Data from the patient records of 400 patients with an established diagnosis has been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistics analysis showed: Sensitivity = 88%; Positive predictive value = 97%; Specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web based screening tool as being a good method of screening for the at risk population. The validation is ongoing. The web based format will allow a more widespread recruitment for different geographic, population and personnel variables. PMID:23550097

  3. Validation Results for LEWICE 2.0. [Supplement

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Rutkowski, Adam

    1999-01-01

    Two CD-ROMs contain experimental ice shapes and code prediction used for validation of LEWICE 2.0 (see NASA/CR-1999-208690, CASI ID 19990021235). The data include ice shapes for both experiment and for LEWICE, all of the input and output files for the LEWICE cases, JPG files of all plots generated, an electronic copy of the text of the validation report, and a Microsoft Excel(R) spreadsheet containing all of the quantitative measurements taken. The LEWICE source code and executable are not contained on the discs.

  4. Validity of worksheet-based guided inquiry and mind mapping for training students’ creative thinking skills

    NASA Astrophysics Data System (ADS)

    Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.

    2018-04-01

    The purpose of this study is to explain the validity of guided inquiry and mind mapping-based worksheet that has been developed in this study. The worksheet implemented the phases of guided inquiry teaching models in order to train students’ creative thinking skills. The creative thinking skills which were trained in this study included fluency, flexibility, originality and elaboration. The types of validity used in this study included content and construct validity. The type of this study is development research with Research and Development (R & D) method. The data of this study were collected using review and validation sheets. Sources of the data were chemistry lecturer and teacher. The data is the analyzed descriptively. The results showed that the worksheet is very valid and could be used as a learning media with the percentage of validity ranged from 82.5%-92.5%.

  5. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for detection of genotoxic carcinogens: II. Summary of definitive validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Validation of a Plasma-Based Comprehensive Cancer Genotyping Assay Utilizing Orthogonal Tissue- and Plasma-Based Methodologies.

    PubMed

    Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli

    2018-04-24

    Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPAs) and negative percent agreement (NPAs) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples comprised a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. ©2018

  7. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    PubMed

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.

  8. Analyzing the Validity of Relationship Banking through Agent-based Modeling

    NASA Astrophysics Data System (ADS)

    Nishikido, Yukihito; Takahashi, Hiroshi

    This article analyzes the validity of relationship banking through agent-based modeling. In the analysis, we especially focus on the relationship between economic conditions and both lenders' and borrowers' behaviors. As a result of intensive experiments, we made the following interesting findings: (1) Relationship banking contributes to reducing bad loan; (2) relationship banking is more effective in enhancing the market growth compared to transaction banking, when borrowers' sales scale is large; (3) keener competition among lenders may bring inefficiency to the market.

  9. Autonomous formation flying based on GPS — PRISMA flight results

    NASA Astrophysics Data System (ADS)

    D'Amico, Simone; Ardaens, Jean-Sebastien; De Florio, Sergio

    2013-01-01

    This paper presents flight results from the early harvest of the Spaceborne Autonomous Formation Flying Experiment (SAFE) conducted in the frame of the Swedish PRISMA technology demonstration mission. SAFE represents one of the first demonstrations in low Earth orbit of an advanced guidance, navigation and control system for dual-spacecraft formations. Innovative techniques based on differential GPS-based navigation and relative orbital elements control are validated and tuned in orbit to fulfill the typical requirements of future distributed scientific instruments for remote sensing.

  10. Validity and validation of expert (Q)SAR systems.

    PubMed

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of > or = 64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  11. Validating Facial Aesthetic Surgery Results with the FACE-Q.

    PubMed

    Kappos, Elisabeth A; Temp, Mathias; Schaefer, Dirk J; Haug, Martin; Kalbermatten, Daniel F; Toth, Bryant A

    2017-04-01

    In aesthetic clinical practice, surgical outcome is best measured by patient satisfaction and quality of life. For many years, there has been a lack of validated questionnaires. Recently, the FACE-Q was introduced, and the authors present the largest series of face-lift patients evaluated by the FACE-Q with the longest follow-up to date. Two hundred consecutive patients were identified who underwent high-superficial musculoaponeurotic system face lifts, with or without additional facial rejuvenation procedures, between January of 2005 and January of 2015. Patients were sent eight FACE-Q scales and were asked to answer questions with regard to their satisfaction. Rank analysis of covariance was used to compare different subgroups. The response rate was 38 percent. Combination of face lift with other procedures resulted in higher satisfaction than face lift alone (p < 0.05). Patients who underwent lipofilling as part of their face lift showed higher satisfaction than patients without lipofilling in three subscales (p < 0.05). Facial rejuvenation surgery, combining a high-superficial musculoaponeurotic system face lift with lipofilling and/or other facial rejuvenation procedures, resulted in a high level of patient satisfaction. The authors recommend the implementation of the FACE-Q by physicians involved in aesthetic facial surgery, to validate their clinical outcomes from a patient's perspective.

  12. Validation of Greyscale-Based Quantitative Ultrasound in Manual Wheelchair Users

    PubMed Central

    Collinger, Jennifer L.; Fullerton, Bradley; Impink, Bradley G.; Koontz, Alicia M.; Boninger, Michael L.

    2010-01-01

    Objective The primary aim of this study is to establish the validity of greyscale-based quantitative ultrasound (QUS) measures of the biceps and supraspinatus tendons. Design Nine QUS measures of the biceps and supraspinatus tendons were computed from ultrasound images collected from sixty-seven manual wheelchair users. Shoulder pathology was measured using questionnaires, physical examination maneuvers, and a clinical ultrasound grading scale. Results Increased age, duration of wheelchair use, and body mass correlated with a darker, more homogenous tendon appearance. Subjects with pain during physical examination tests for biceps tenderness and acromioclavicular joint tenderness exhibited significantly different supraspinatus QUS values. Even when controlling for tendon depth, QUS measures of the biceps tendon differed significantly between subjects with healthy tendons, mild tendinosis, and severe tendinosis. Clinical grading of supraspinatus tendon health was correlated with QUS measures of the supraspinatus tendon. Conclusions Quantitative ultrasound is valid method to quantify tendinopathy and may allow for early detection of tendinosis. Manual wheelchair users are at a high risk for developing shoulder tendon pathology and may benefit from quantitative ultrasound-based research that focuses on identifying interventions designed to reduce this risk. PMID:20407304

  13. Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning.

    PubMed

    Morin, Ruth T; Axelrod, Bradley N

    Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A data-set of 680 neuropsychological evaluation protocols was analyzed using a LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two Classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.

  14. Validity and clinical utility of the DSM-5 severity specifier for bulimia nervosa: results from a multisite sample of patients who received evidence-based treatment.

    PubMed

    Dakanalis, Antonios; Bartoli, Francesco; Caslini, Manuela; Crocamo, Cristina; Zanetti, Maria Assunta; Riva, Giuseppe; Clerici, Massimo; Carrà, Giuseppe

    2017-12-01

    A new "severity specifier" for bulimia nervosa (BN), based on the frequency of inappropriate weight compensatory behaviours (IWCBs), was added to the DSM-5 as a means of documenting heterogeneity and variability in the severity of the disorder. Yet, evidence for its validity in clinical populations, including prognostic significance for treatment outcome, is currently lacking. Existing data from 281 treatment-seeking patients with DSM-5 BN, who received the best available treatment for their disorder (manual-based cognitive behavioural therapy; CBT) in an outpatient setting, were re-analysed to examine whether these patients subgrouped based on the DSM-5 severity levels would show meaningful and consistent differences on (a) a range of clinical variables assessed at pre-treatment and (b) post-treatment abstinence from IWCBs. Results highlight that the mild, moderate, severe, and extreme severity groups were statistically distinguishable on 22 variables assessed at pre-treatment regarding eating disorder pathological features, maintenance factors of BN, associated (current) and lifetime psychopathology, social maladjustment and illness-specific functional impairment, and abstinence outcome. Mood intolerance, a maintenance factor of BN but external to eating disorder pathological features (typically addressed within CBT), emerged as the primary clinical variable distinguishing the severity groups showing a differential treatment response. Overall, the findings speak to the concurrent and predictive validity of the new DSM-5 severity criterion for BN and are important because a common benchmark informing patients, clinicians, and researchers about severity of the disorder and allowing severity fluctuation and patient's progress to be tracked does not exist so far. Implications for future research are outlined.

  15. Face and construct validity of a computer-based virtual reality simulator for ERCP.

    PubMed

    Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V

    2010-02-01

    Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.

  16. Convergent Validity Evidence regarding the Validity of the Chilean Standards-Based Teacher Evaluation System

    ERIC Educational Resources Information Center

    Santelices, Maria Veronica; Taut, Sandy

    2011-01-01

    This paper describes convergent validity evidence regarding the mandatory, standards-based Chilean national teacher evaluation system (NTES). The study examined whether NTES identifies--and thereby rewards or punishes--the "right" teachers as high- or low-performing. We collected in-depth teaching performance data on a sample of 58…

  17. Validation of a virtual reality-based simulator for shoulder arthroscopy.

    PubMed

    Rahm, Stefan; Germann, Marco; Hingsammer, Andreas; Wieser, Karl; Gerber, Christian

    2016-05-01

    This study was to determine face and construct validity of a new virtual reality-based shoulder arthroscopy simulator which uses passive haptic feedback. Fifty-one participants including 25 novices (<20 shoulder arthroscopies) and 26 experts (>100 shoulder arthroscopies) completed two tests: for assessment of face validity, a questionnaire was filled out concerning quality of simulated reality and training potential using a 7-point Likert scale (range 1-7). Construct validity was tested by comparing simulator metrics (operation time in seconds, camera and grasper pathway in centimetre and grasper openings) between novices and experts test results. Overall simulated reality was rated high with a median value of 5.5 (range 2.8-7) points. Training capacity scored a median value of 5.8 (range 3-7) points. Experts were significantly faster in the diagnostic test with a median of 91 (range 37-208) s than novices with 1177 (range 81-383) s (p < 0.0001) and in the therapeutic test 102 (range 58-283) s versus 229 (range 114-399) s (p < 0.0001). Similar results were seen in the other metric values except in the camera pathway in the therapeutic test. The tested simulator achieved high scores in terms of realism and training capability. It reliably discriminated between novices and experts. Further improvements of the simulator, especially in the field of therapeutic arthroscopy, might improve its value as training and assessment tool for shoulder arthroscopy skills. II.

  18. A ferrofluid based energy harvester: Computational modeling, analysis, and experimental validation

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Alazemi, Saad F.; Daqaq, Mohammed F.; Li, Gang

    2018-03-01

    A computational model is described and implemented in this work to analyze the performance of a ferrofluid based electromagnetic energy harvester. The energy harvester converts ambient vibratory energy into an electromotive force through a sloshing motion of a ferrofluid. The computational model solves the coupled Maxwell's equations and Navier-Stokes equations for the dynamic behavior of the magnetic field and fluid motion. The model is validated against experimental results for eight different configurations of the system. The validated model is then employed to study the underlying mechanisms that determine the electromotive force of the energy harvester. Furthermore, computational analysis is performed to test the effect of several modeling aspects, such as three-dimensional effect, surface tension, and type of the ferrofluid-magnetic field coupling on the accuracy of the model prediction.

  19. Validity in work-based assessment: expanding our horizons.

    PubMed

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

    Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be under-stood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these

  20. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a gurantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current

  1. Validation of Medicaid claims-based diagnosis of myocardial infarction using an HIV clinical cohort

    PubMed Central

    Brouwer, Emily S.; Napravnik, Sonia; Eron, Joseph J; Simpson, Ross J; Brookhart, M. Alan; Stalzer, Brant; Vinikoor, Michael; Floris-Moore, Michelle; Stürmer, Til

    2014-01-01

    Background In non-experimental comparative effectiveness research using healthcare databases, outcome measurements must be validated to evaluate and potentially adjust for misclassification bias. We aimed to validate claims-based myocardial infarction algorithms in a Medicaid population using an HIV clinical cohort as the gold standard. Methods Medicaid administrative data were obtained for the years 2002–2008 and linked to the UNC CFAR HIV Clinical Cohort based on social security number, first name and last name and myocardial infarction were adjudicated. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated. Results There were 1,063 individuals included. Over a median observed time of 2.5 years, 17 had a myocardial infarction. Specificity ranged from 0.979–0.993 with the highest specificity obtained using criteria with the ICD-9 code in the primary and secondary position and a length of stay ≥ 3 days. Sensitivity of myocardial infarction ascertainment varied from 0.588–0.824 depending on algorithm. Conclusion: Specificities of varying claims-based myocardial infarction ascertainment criteria are high but small changes impact positive predictive value in a cohort with low incidence. Sensitivities vary based on ascertainment criteria. Type of algorithm used should be prioritized based on study question and maximization of specific validation parameters that will minimize bias while also considering precision. PMID:23604043

  2. The validity and reproducibility of food-frequency questionnaire-based total antioxidant capacity estimates in Swedish women.

    PubMed

    Rautiainen, Susanne; Serafini, Mauro; Morgenstern, Ralf; Prior, Ronald L; Wolk, Alicja

    2008-05-01

    Total antioxidant capacity (TAC) provides an assessment of antioxidant activity and synergistic interactions of redox molecules in foods and plasma. We investigated the validity and reproducibility of food-frequency questionnaire (FFQ)-based TAC estimates assessed by oxygen radical absorbance capacity (ORAC), total radical-trapping antioxidant parameters (TRAP), and ferric-reducing antioxidant power (FRAP) food values. Validity and reproducibility were evaluated in 2 random samples from the Swedish Mammography Cohort. Validity was studied by comparing FFQ-based TAC estimates with one measurement of plasma TAC in 108 women (54-73-y-old dietary supplement nonusers). Reproducibility was studied in 300 women (56-75 y old, 50.7% dietary supplement nonusers) who completed 2 FFQs 1 y apart. Fruit and vegetables (mainly apples, pears, oranges, and berries) were the major contributors to FFQ-based ORAC (56.5%), TRAP (41.7%), and FRAP (38.0%) estimates. In the validity study, whole plasma ORAC was correlated (Pearson) with FFQ-based ORAC (r = 0.35), TRAP (r = 0.31), and FRAP (r = 0.28) estimates from fruit and vegetables. Correlations between lipophilic plasma ORAC and FFQ-based ORAC, TRAP, and FRAP estimates from fruit and vegetables were 0.41, 0.31, and 0.28, and correlations with plasma TRAP estimates were 0.31, 0.30, and 0.28, respectively. Hydrophilic plasma ORAC and plasma FRAP values did not correlate with FFQ-based TAC estimates. Reproducibility, assessed by intraclass correlations, was 0.60, 0.61, and 0.61 for FFQ-based ORAC, TRAP, and FRAP estimates, respectively, from fruit and vegetables. FFQ-based TAC values represent valid and reproducible estimates that may be used in nutritional epidemiology to assess antioxidant intake from foods. Further studies in other populations to confirm these results are needed.

  3. Cooperative Learning: Improving University Instruction by Basing Practice on Validated Theory

    ERIC Educational Resources Information Center

    Johnson, David W.; Johnson, Roger T.; Smith, Karl A.

    2014-01-01

    Cooperative learning is an example of how theory validated by research may be applied to instructional practice. The major theoretical base for cooperative learning is social interdependence theory. It provides clear definitions of cooperative, competitive, and individualistic learning. Hundreds of research studies have validated its basic…

  4. Effective data validation of high-frequency data: time-point-, time-interval-, and trend-based methods.

    PubMed

    Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F

    1997-09-01

    Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. Dealing with high-frequency data, examining single measurements is not sufficient. It is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part for all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.

  5. Validation Results for LEWICE 3.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2005-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to computational fluid dynamics (CFD) flow solvers and has an empirical model for the supercooled large droplet (SLD) regime. An extensive comparison of the results in a quantifiable manner against the database of ice shapes and collection efficiency that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper will show the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for ice shape comparison. This report will first describe the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first order physical effects of large droplet phenomena into icing software. Comparisons are then made to every single element two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing with 21 bin drop size distributions and a lift correction (angle of attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.

  6. Modeling and validation of photometric characteristics of space targets oriented to space-based observation.

    PubMed

    Wang, Hongyuan; Zhang, Wei; Dong, Aotuo

    2012-11-10

    A modeling and validation method of photometric characteristics of the space target was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry characteristics of the target were illustrated by the surface equations based on its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which considers the character of surface Gauss statistics and microscale self-shadow and is obtained by measurement and modeling in advance. The contributing surfaces of the target to observation system were determined by coordinate transformation according to the relative position of the space-based target, the background radiation sources, and the observation platform. Then a mathematical model on photometric characteristics of the space target was built by summing reflection components of all the surfaces. Photometric characteristics simulation of the space-based target was achieved according to its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was made based on the scale model of the satellite. The calculated results fit well with the measured results, which indicates the modeling method of photometric characteristics of the space target is correct.

  7. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for estimating Body Segment Inertial Parameters’ (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. Until today a variety of studies have been conducted to improve BSIP estimation but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from optical motion capture system and a force plate respectively. To compare BSIPs, we used the measured contact forces (Force plate) as the ground truth, and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation however is large and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial and deviation from the estimations can actually result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and more specifically it shows the accuracy of those parameters. PMID:28662090

  8. Non-invasive transcranial ultrasound therapy based on a 3D CT scan: protocol validation and in vitro results

    NASA Astrophysics Data System (ADS)

    Marquet, F.; Pernot, M.; Aubry, J.-F.; Montaldo, G.; Marsac, L.; Tanter, M.; Fink, M.

    2009-05-01

    A non-invasive protocol for transcranial brain tissue ablation with ultrasound is studied and validated in vitro. The skull induces strong aberrations both in phase and in amplitude, resulting in a severe degradation of the beam shape. Adaptive corrections of the distortions induced by the skull bone are performed using a previous 3D computational tomography scan acquisition (CT) of the skull bone structure. These CT scan data are used as entry parameters in a FDTD (finite differences time domain) simulation of the full wave propagation equation. A numerical computation is used to deduce the impulse response relating the targeted location and the ultrasound therapeutic array, thus providing a virtual time-reversal mirror. This impulse response is then time-reversed and transmitted experimentally by a therapeutic array positioned exactly in the same referential frame as the one used during CT scan acquisitions. In vitro experiments are conducted on monkey and human skull specimens using an array of 300 transmit elements working at a central frequency of 1 MHz. These experiments show a precise refocusing of the ultrasonic beam at the targeted location with a positioning error lower than 0.7 mm. The complete validation of this transcranial adaptive focusing procedure paves the way to in vivo animal and human transcranial HIFU investigations.

  9. Clinical Validity of the ADI-R in a US-Based Latino Population

    ERIC Educational Resources Information Center

    Vanegas, Sandra B.; Magaña, Sandra; Morales, Miguel; McNamara, Ellyn

    2016-01-01

    The Autism Diagnostic Interview-Revised (ADI-R) has been validated as a tool to aid in the diagnosis of Autism; however, given the growing diversity in the United States, the ADI-R must be validated for different languages and cultures. This study evaluates the validity of the ADI-R in a US-based Latino, Spanish-speaking population of 50 children…

  10. Design and validation of general biology learning program based on scientific inquiry skills

    NASA Astrophysics Data System (ADS)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended to teach science. The reality in the schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. The study aims to1) analyze students’ difficulties in learning General Biology, 2) design General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary school/Islamic elementary schools. The workflow of program design includes identifying learning difficulties of General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools were performed by expert judge. The results showed that: 1) there are some problems identified in General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and, scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions. The first section in your paper.

  11. Validation of hot-poured crack sealant performance-based guidelines.

    DOT National Transportation Integrated Search

    2017-06-01

    This report summarizes a comprehensive research effort to validate thresholds for performance-based guidelines and : grading system for hot-poured asphalt crack sealants. A series of performance tests were established in earlier research and : includ...

  12. Assessing Procedural Competence: Validity Considerations.

    PubMed

    Pugh, Debra M; Wood, Timothy J; Boulet, John R

    2015-10-01

    Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence because they relate to SBME. A case study is presented.

  13. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    runs in real-time by assimilating weather data and uses Monte Carlo simulation techniques to manage the geotechnical and hydrological input parameters. In this context, an assessment of the factors controlling the geotechnical and hydrological features is crucial in order to understand the occurrence of slope instability mechanisms and to provide reliable forecasting of the hydrogeological hazard occurrence, especially in relation to weather events. In particular, the model and the soil characterization were applied in back analysis, in order to assess the reliability of the model through validation of the results with landslide events that occurred during the period. The validation was performed on four past events of intense rainfall that have affected Valle d'Aosta region between 2008 and 2010 years triggering fast shallows landslides. The simulations show substantial improvement of the reliability of the results compared to the use of literature parameters. A statistical analysis of the HIRESSS outputs in terms of failure probability has been carried out in order to define reliable alert levels for regional landslide early warning systems.

  14. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use.

    PubMed

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M

    2006-03-01

    This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.

  15. Meeting Report: Validation of Toxicogenomics-Based Test Systems: ECVAM–ICCVAM/NICEATM Considerations for Regulatory Use

    PubMed Central

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.

    2006-01-01

    This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466

  16. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  17. Noninvasive assessment of mitral inertness: clinical results with numerical model validation

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo was performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forcers (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D (RATIO)) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.

  18. Expert system verification and validation survey. Delivery 2: Survey results

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and industry applications. This is the first task of the series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.

  19. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UCBI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.

  20. Initial Validation and Results of Geoscience Laser Altimeter System Optical Properties Retrievals

    NASA Technical Reports Server (NTRS)

    Hlavka, Dennis L.; Hart, W. D.; Pal, S. P.; McGill, M.; Spinhirne, J. D.

    2004-01-01

    Verification of Geoscience Laser Altimeter System (GLAS) optical retrievals is . problematic in that passage over ground sites is both instantaneous and sparse plus space-borne passive sensors such as MODIS are too frequently out of sync with the GLAS position. In October 2003, the GLAS Validation Experiment was executed from NASA Dryden Research Center, California to greatly increase validation possibilities. The high-altitude NASA ER-2 aircraft and onboard instrumentation of Cloud Physics Lidar (CPL), MODIS Airborne Simulator (MAS), and/or MODIS/ASTER Airborne Simulator (MASTER) under-flew seven orbit tracks of GLAS for cirrus, smoke, and urban pollution optical properties inter-comparisons. These highly calibrated suite of instruments are the best data set yet to validate GLAS atmospheric parameters. In this presentation, we will focus on the inter-comparison with GLAS and CPL and draw preliminary conclusions about the accuracies of the GLAS 532nm retrievals of optical depth, extinction, backscatter cross section, and calculated extinction-to-backscatter ratio. Comparisons to an AERONET/MPL ground-based site at Monterey, California will be attempted. Examples of GLAS operational optical data products will be shown.

  1. Validation of column-based chromatography processes for the purification of proteins. Technical report No. 14.

    PubMed

    2008-01-01

    PDA Technical Report No. 14 has been written to provide current best practices, such as application of risk-based decision making, based in sound science to provide a foundation for the validation of column-based chromatography processes and to expand upon information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, confirmation, and/or minor adjustment to these parameters at manufacturing scale during production of conformance batches and maintenance of the validated state throughout the product's life cycle.

  2. Initial validation of a web-based self-administered neuropsychological test battery for older adults and seniors

    PubMed Central

    Hansen, Tor Ivar; Haferstrom, Elise Christina D.; Brunner, Jan F.; Lehn, Hanne; Håberg, Asta Kristine

    2015-01-01

    Introduction: Computerized neuropsychological tests are effective in assessing different cognitive domains, but are often limited by the need of proprietary hardware and technical staff. Web-based tests can be more accessible and flexible. We aimed to investigate validity, effects of computer familiarity, education, and age, and the feasibility of a new web-based self-administered neuropsychological test battery (Memoro) in older adults and seniors. Method: A total of 62 (37 female) participants (mean age 60.7 years) completed the Memoro web-based neuropsychological test battery and a traditional battery composed of similar tests intended to measure the same cognitive constructs. Participants were assessed on computer familiarity and how they experienced the two batteries. To properly test the factor structure of Memoro, an additional factor analysis in 218 individuals from the HUNT population was performed. Results: Comparing Memoro to traditional tests, we observed good concurrent validity (r = .49–.63). The performance on the traditional and Memoro test battery was consistent, but differences in raw scores were observed with higher scores on verbal memory and lower in spatial memory in Memoro. Factor analysis indicated two factors: verbal and spatial memory. There were no correlations between test performance and computer familiarity after adjustment for age or age and education. Subjects reported that they preferred web-based testing as it allowed them to set their own pace, and they did not feel scrutinized by an administrator. Conclusions: Memoro showed good concurrent validity compared to neuropsychological tests measuring similar cognitive constructs. Based on the current results, Memoro appears to be a tool that can be used to assess cognitive function in older and senior adults. Further work is necessary to ascertain its validity and reliability. PMID:26009791

  3. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    PubMed

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group, with unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.

  4. Simulated Driving Assessment (SDA) for teen drivers: results from a validation study.

    PubMed

    McDonald, Catherine C; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K

    2015-06-01

    Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardised assessments of teen driving skills exist. The purpose of this study is to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. The SDA's 35 min simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16-17 years, provisional license ≤90 days) and 17 experienced adults (age 25-50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor (DEI Score) reviewed videos of SDA performance. The SDA demonstrated construct validity: (1) teens had a higher Error Score than adults (30 vs. 13, p=0.02); (2) For each additional error committed, the RR of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI 1.05 to 1.10, p<0.01). The SDA-demonstrated criterion validity: Error Score was correlated with DEI Score (r=-0.66, p<0.001). This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  5. [Core factors of schizophrenia structure based on PANSS and SAPS/SANS results. Discerning and head-to-head comparisson of PANSS and SASPS/SANS validity].

    PubMed

    Masiak, Marek; Loza, Bartosz

    2004-01-01

    A lot of inconsistencies across dimensional studies of schizophrenia(s) are being unveiled. These problems are strongly related to the methodological aspects of collecting data and specific statistical analyses. Psychiatrists have developed lots of psychopathological models derived from analytic studies based on SAPS/SANS (the Scale for the Assessment of Positive Symptoms/the Scale for the Assessment of Negative Symptoms) and PANSS (The Positive and Negative Syndrome Scale). The unique validation of parallel two independent factor models was performed--ascribed to the same illness and based on different diagnostic scales--to investigate indirect methodological causes of clinical discrepancies. 100 newly admitted patients (mean age--33.5, 18-45, males--64, females--36, hospitalised on average 5.15 times) with paranoid schizophrenia (according to ICD-10) were scored and analysed using PANSS and SAPS/SANS during psychotic exacerbation. All patients were treated with neuroleptics of various kinds with 410mg equivalents of chlorpromazine (atypicals:typicals --> 41:59). Factor analyses were applied to basic results (with principal component analysis, normalised varimax rotation). Investing the cross-model validity, canonical analysis was applied. Models of schizophrenia varied from 3 to 5 factors. PANSS model included: positive, negative, disorganisation, cognitive and depressive components and SAPS/SANS model was dominated by positive, negative and disorganisation factors. The SAPS/SANS accounted for merely 48% of the PANSS common variances. The SAPS/SANS combined measurement preferentially (67% of canonical variance) targeted positive-negative dichotomy. Respectively, PANSS shared positive-negative phenomenology in 35% of its own variance. The general concept of five-dimensionality in paranoid schizophrenia looks clinically more heuristic and statistically more stabilised.

  6. Analytic Validation of Immunohistochemical Assays: A Comparison of Laboratory Practices Before and After Introduction of an Evidence-Based Guideline.

    PubMed

    Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    - Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. - To assess changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline. - A survey on current immunohistochemistry assay validation practices and on the awareness and adoption of a recently published guideline was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing. The results were compared with an earlier survey of validation practices. - Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent and 79.9% (550 of 688) of those have already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty in finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. - Dissemination of the 2014 evidence-based guideline validation practices had a positive impact on laboratory performance; some or all of the recommendations have been adopted by nearly 80% of respondents.

  7. Measuring Practitioner Attitudes toward Evidence-Based Treatments: A Validation Study

    ERIC Educational Resources Information Center

    Ashcraft, Rindee G. P.; Foster, Sharon L.; Lowery, Amy E.; Henggeler, Scott W.; Chapman, Jason E.; Rowland, Melisa D.

    2011-01-01

    A better understanding of clinicians' attitudes toward evidence-based treatments (EBT) will presumably enhance the transfer of EBTs for substance-abusing adolescents from research to clinical application. The reliability and validity of two measures of therapist attitudes toward EBT were examined: the Evidence-Based Practice Attitude Scale…

  8. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine

    PubMed Central

    Fritsche, L; Greenhalgh, T; Falck-Ytter, Y; Neumayer, H-H; Kunz, R

    2002-01-01

    Objective To develop and validate an instrument for measuring knowledge and skills in evidence based medicine and to investigate whether short courses in evidence based medicine lead to a meaningful increase in knowledge and skills. Design Development and validation of an assessment instrument and before and after study. Setting Various postgraduate short courses in evidence based medicine in Germany. Participants The instrument was validated with experts in evidence based medicine, postgraduate doctors, and medical students. The effect of courses was assessed by postgraduate doctors from medical and surgical backgrounds. Intervention Intensive 3 day courses in evidence based medicine delivered through tutor facilitated small groups. Main outcome measure Increase in knowledge and skills. Results The questionnaire distinguished reliably between groups with different expertise in evidence based medicine. Experts attained a threefold higher average score than students. Postgraduates who had not attended a course performed better than students but significantly worse than experts. Knowledge and skills in evidence based medicine increased after the course by 57% (mean score before course 6.3 (SD 2.9) v 9.9 (SD 2.8), P<0.001). No difference was found among experts or students in absence of an intervention. Conclusions The instrument reliably assessed knowledge and skills in evidence based medicine. An intensive 3 day course in evidence based medicine led to a significant increase in knowledge and skills. What is already known on this topicNumerous observational studies have investigated the impact of teaching evidence based medicine to healthcare professionals, with conflicting resultsMost of the studies were of poor methodological qualityWhat this study addsAn instrument assessing basic knowledge and skills required for practising evidence based medicine was developed and validatedAn intensive 3 day course on evidence based medicine for doctors from various backgrounds

  9. Adaptation and validation of the Evidence-Based Practice Belief and Implementation scales for French-speaking Swiss nurses and allied healthcare providers.

    PubMed

    Verloo, Henk; Desmedt, Mario; Morin, Diane

    2017-09-01

    To evaluate two psychometric properties of the French versions of the Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales, namely their internal consistency and construct validity. The Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales developed by Melnyk et al. are recognised as valid, reliable instruments in English. However, no psychometric validation for their French versions existed. Secondary analysis of a cross sectional survey. Source data came from a cross-sectional descriptive study sample of 382 nurses and other allied healthcare providers. Cronbach's alpha was used to evaluate internal consistency, and principal axis factor analysis and varimax rotation were computed to determine construct validity. The French Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales showed excellent reliability, with Cronbach's alphas close to the scores established by Melnyk et al.'s original versions. Principal axis factor analysis showed medium-to-high factor loading scores without obtaining collinearity. Principal axis factor analysis with varimax rotation of the 16-item Evidence-Based Practice Beliefs scale resulted in a four-factor loading structure. Principal axis factor analysis with varimax rotation of the 17-item Evidence-Based Practice Implementation scale revealed a two-factor loading structure. Further research should attempt to understand why the French Evidence-Based Practice Implementation scale showed a two-factor loading structure but Melnyk et al.'s original has only one. The French versions of the Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales can both be considered valid and reliable instruments for measuring Evidence-Based Practice beliefs and implementation. The results suggest that the French Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales are valid and reliable and can therefore be used to

  10. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results.

    PubMed

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-10-01

    In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users' perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in 'Quality of Work Life', 'Perceived Usefulness', 'Perceived Ease of Use', and 'User Control', respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: Creation, development and psychometric validation

    PubMed Central

    Fernández-Domínguez, Juan Carlos; de Pedro-Gómez, Joan Ernest; Morales-Asencio, José Miguel; Sastre-Fullana, Pedro; Sesé-Abad, Albert

    2017-01-01

    Introduction Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and also in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence of the scores of a new transdisciplinary psychometric tool, the Health Sciences Evidence-Based Practice (HS-EBP), for measuring the construct EBP in Health Sciences professionals. Methods A pilot study and a subsequent two-stage validation test sample were conducted to progressively refine the instrument until a reduced 60-item version with a five-factor latent structure. Reliability was analysed through both Cronbach’s alpha coefficient and intraclass correlations (ICC). Latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison aproach. Evidence of criterion validity of the scores obtained was achieved by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables; while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19). Results Adequate evidence of both reliability and ICC was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; CI 90% RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP showed the capability to find differences between EBP training levels as an important evidence of decision validity. Conclusions Reliability and validity evidence obtained regarding the HS-EBP confirm the adequate operationalisation of the EBP construct as a process put into practice to respond to every clinical situation arising in the daily practice of professionals in health sciences (transprofessional). The

  12. A consensus-based framework for design, validation, and implementation of simulation-based training curricula in surgery.

    PubMed

    Zevin, Boris; Levy, Jeffrey S; Satava, Richard M; Grantcharov, Teodor P

    2012-10-01

    Simulation-based training can improve technical and nontechnical skills in surgery. To date, there is no consensus on the principles for design, validation, and implementation of a simulation-based surgical training curriculum. The aim of this study was to define such principles and formulate them into an interoperable framework using international expert consensus based on the Delphi method. Literature was reviewed, 4 international experts were queried, and consensus conference of national and international members of surgical societies was held to identify the items for the Delphi survey. Forty-five international experts in surgical education were invited to complete the online survey by ranking each item on a Likert scale from 1 to 5. Consensus was predefined as Cronbach's α ≥0.80. Items that 80% of experts ranked as ≥4 were included in the final framework. Twenty-four international experts with training in general surgery (n = 11), orthopaedic surgery (n = 2), obstetrics and gynecology (n = 3), urology (n = 1), plastic surgery (n = 1), pediatric surgery (n = 1), otolaryngology (n = 1), vascular surgery (n = 1), military (n = 1), and doctorate-level educators (n = 2) completed the iterative online Delphi survey. Consensus among participants was achieved after one round of the survey (Cronbach's α = 0.91). The final framework included predevelopment analysis; cognitive, psychomotor, and team-based training; curriculum validation evaluation and improvement; and maintenance of training. The Delphi methodology allowed for determination of international expert consensus on the principles for design, validation, and implementation of a simulation-based surgical training curriculum. These principles were formulated into a framework that can be used internationally across surgical specialties as a step-by-step guide for the development and validation of future simulation-based training curricula. Copyright © 2012 American College of Surgeons. Published by

  13. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers.

    PubMed

    Reeves, Todd D; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology--either quantitative or qualitative--on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. © 2016 T. D. Reeves and G. Marbach-Ad. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  14. Community-Based Validation of the Social Phobia Screener (SOPHS).

    PubMed

    Batterham, Philip J; Mackinnon, Andrew J; Christensen, Helen

    2017-10-01

    There is a need for brief, accurate screening scales for social anxiety disorder to enable better identification of the disorder in research and clinical settings. A five-item social anxiety screener, the Social Phobia Screener (SOPHS), was developed to address this need. The screener was validated in two samples: (a) 12,292 Australian young adults screened for a clinical trial, including 1,687 participants who completed a phone-based clinical interview and (b) 4,214 population-based Australian adults recruited online. The SOPHS (78% sensitivity, 72% specificity) was found to have comparable screening performance to the Social Phobia Inventory (77% sensitivity, 71% specificity) and Mini-Social Phobia Inventory (74% sensitivity, 73% specificity) relative to clinical criteria in the trial sample. In the population-based sample, the SOPHS was also accurate (95% sensitivity, 73% specificity) in identifying Diagnostic and Statistical Manual of Mental Disorders-Fifth edition social anxiety disorder. The SOPHS is a valid and reliable screener for social anxiety that is freely available for use in research and clinical settings.

  15. Design, testing and validation of an innovative web-based instrument to evaluate school meal quality.

    PubMed

    Patterson, Emma; Quetel, Anna-Karin; Lilja, Karin; Simma, Marit; Olsson, Linnea; Elinder, Liselotte Schäfer

    2013-06-01

    To develop a feasible, valid, reliable web-based instrument to objectively evaluate school meal quality in Swedish primary schools. The construct 'school meal quality' was operationalized by an expert panel into six domains, one of which was nutritional quality. An instrument was drafted and pilot-tested. Face validity was evaluated by the panel. Feasibility was established via a large national study. Food-based criteria to predict the nutritional adequacy of school meals in terms of fat quality, iron, vitamin D and fibre content were developed. Predictive validity was evaluated by comparing the nutritional adequacy of school menus based on these criteria with the results from a nutritional analysis. Inter-rater reliability was also assessed. The instrument was developed between 2010 and 2012. It is designed for use in all primary schools by school catering and/or management representatives. A pilot-test of eighty schools in Stockholm (autumn 2010) and a further test of feasibility in 191 schools nationally (spring 2011). The four nutrient-specific food-based criteria predicted nutritional adequacy with sensitivity ranging from 0.85 to 1.0, specificity from 0.45 to 1.0 and accuracy from 0.67 to 1.0. The sample in the national study was statistically representative and the majority of users rated the questionnaire positively, suggesting the instrument is feasible. The inter-rater reliability was fair to almost perfect for continuous variables and agreement was ≥ 67 % for categorical variables. An innovative web-based system to comprehensively monitor school meal quality across several domains, with validated questions in the nutritional domain, is available in Sweden for the first time.

  16. Screening tool for oropharyngeal dysphagia in stroke - Part I: evidence of validity based on the content and response processes.

    PubMed

    Almeida, Tatiana Magalhães de; Cola, Paula Cristina; Pernambuco, Leandro de Araújo; Magalhães, Hipólito Virgílio; Magnoni, Carlos Daniel; Silva, Roberta Gonçalves da

    2017-08-17

    The aim of the present study was to identify the evidence of validity based on the content and response process of the Rastreamento de Disfagia Orofaríngea no Acidente Vascular Encefálico (RADAVE; "Screening Tool for Oropharyngeal Dysphagia in Stroke"). The criteria used to elaborate the questions were based on a literature review. A group of judges consisting of 19 different health professionals evaluated the relevance and representativeness of the questions, and the results were analyzed using the Content Validity Index. In order to evidence validity based on the response processes, 23 health professionals administered the screening tool and analyzed the questions using a structured scale and cognitive interview. The RADAVE structured to be applied in two stages. The first version consisted of 18 questions in stage I and 11 questions in stage II. Eight questions in stage I and four in stage II did not reach the minimum Content Validity Index, requiring reformulation by the authors. The cognitive interview demonstrated some misconceptions. New adjustments were made and the final version was produced with 12 questions in stage I and six questions in stage II. It was possible to develop a screening tool for dysphagia in stroke with adequate evidence of validity based on content and response processes. Both validity evidences obtained so far allowed to adjust the screening tool in relation to its construct. The next studies will analyze the other evidences of validity and the measures of accuracy.

  17. CONTENT VALIDITY OF SYMPTOM-BASED MEASURES FOR DIABETIC, CHEMOTHERAPY, AND HIV PERIPHERAL NEUROPATHY

    PubMed Central

    GEWANDTER, JENNIFER S.; BURKE, LAURIE; CAVALETTI, GUIDO; DWORKIN, ROBERT H.; GIBBONS, CHRISTOPHER; GOVER, TONY D.; HERRMANN, DAVID N.; MCARTHUR, JUSTIN C.; MCDERMOTT, MICHAEL P.; RAPPAPORT, BOB A.; REEVE, BRYCE B.; RUSSELL, JAMES W.; SMITH, A. GORDON; SMITH, SHANNON M.; TURK, DENNIS C.; VINIK, AARON I.; FREEMAN, ROY

    2017-01-01

    Introduction No treatments for axonal peripheral neuropathy are approved by the United States Food and Drug Administration (FDA). Although patient- and clinician-reported outcomes are central to evaluating neuropathy symptoms, they can be difficult to assess accurately. The inability to identify efficacious treatments for peripheral neuropathies could be due to invalid or inadequate outcome measures. Methods This systematic review examined the content validity of symptom-based measures of diabetic peripheral neuropathy, HIV neuropathy, and chemotherapy-induced peripheral neuropathy. Results Use of all FDA-recommended methods to establish content validity was only reported for 2 of 18 measures. Multiple sensory and motor symptoms were included in measures for all 3 conditions; these included numbness, tingling, pain, allodynia, difficulty walking, and cramping. Autonomic symptoms were less frequently included. Conclusions Given significant overlap in symptoms between neuropathy etiologies, a measure with content validity for multiple neuropathies with supplemental disease-specific modules could be of great value in the development of disease-modifying treatments for peripheral neuropathies. PMID:27447116

  18. Torso-Tank Validation of High-Resolution Electrogastrography (EGG): Forward Modelling, Methodology and Results.

    PubMed

    Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng

    2018-04-27

    Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy showing [Formula: see text] 1.8% difference, [Formula: see text] 0.99 correlation, and [Formula: see text] 0.04 normalised RMS error between experimental and simulated findings. These results reaffirm previous findings and these methods in unison therefore present a promising morphological-based methodology for advancing the understanding and clinical applications of EGG.

  19. A cross-validation trial of an Internet-based prevention program for alcohol and cannabis: Preliminary results from a cluster randomised controlled trial.

    PubMed

    Champion, Katrina E; Newton, Nicola C; Stapinski, Lexine; Slade, Tim; Barrett, Emma L; Teesson, Maree

    2016-01-01

    Replication is an important step in evaluating evidence-based preventive interventions and is crucial for establishing the generalizability and wider impact of a program. Despite this, few replications have occurred in the prevention science field. This study aims to fill this gap by conducting a cross-validation trial of the Climate Schools: Alcohol and Cannabis course, an Internet-based prevention program, among a new cohort of Australian students. A cluster randomized controlled trial was conducted among 1103 students (Mage: 13.25 years) from 13 schools in Australia in 2012. Six schools received the Climate Schools course and 7 schools were randomized to a control group (health education as usual). All students completed a self-report survey at baseline and immediately post-intervention. Mixed-effects regressions were conducted for all outcome variables. Outcomes assessed included alcohol and cannabis use, knowledge and intentions to use these substances. Compared to the control group, immediately post-intervention the intervention group reported significantly greater alcohol (d = 0.67) and cannabis knowledge (d = 0.72), were less likely to have consumed any alcohol (even a sip or taste) in the past 6 months (odds ratio = 0.69) and were less likely to intend on using alcohol in the future (odds ratio = 0.62). However, there were no effects for binge drinking, cannabis use or intentions to use cannabis. These preliminary results provide some support for the Internet-based Climate Schools: Alcohol and Cannabis course as a feasible way of delivering alcohol and cannabis prevention. Intervention effects for alcohol and cannabis knowledge were consistent with results from the original trial; however, analyses of longer-term follow-up data are needed to provide a clearer indication of the efficacy of the intervention, particularly in relation to behavioral changes. © The Royal Australian and New Zealand College of Psychiatrists 2015.

  20. Validation techniques for fault emulation of SRAM-based FPGAs

    DOE PAGES

    Quinn, Heather; Wirthlin, Michael

    2015-08-07

    A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness for mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation systems does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of behavior of the FPGA inmore » a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.« less

  1. Validity and Reliability of the Turkish Version of Needs Based Biopsychosocial Distress Instrument for Cancer Patients (CANDI)

    PubMed Central

    Beyhun, Nazim Ercument; Can, Gamze; Tiryaki, Ahmet; Karakullukcu, Serdar; Bulut, Bekir; Yesilbas, Sehbal; Kavgaci, Halil; Topbas, Murat

    2016-01-01

    Background Needs based biopsychosocial distress instrument for cancer patients (CANDI) is a scale based on needs arising due to the effects of cancer. Objectives The aim of this research was to determine the reliability and validity of the CANDI scale in the Turkish language. Patients and Methods The study was performed with the participation of 172 cancer patients aged 18 and over. Factor analysis (principal components analysis) was used to assess construct validity. Criterion validities were tested by computing Spearman correlation between CANDI and hospital anxiety depression scale (HADS), and brief symptom inventory (BSI) (convergent validity) and quality of life scales (FACT-G) (divergent validity). Test-retest reliabilities and internal consistencies were measured with intraclass correlation (ICC) and Cronbach-α. Results A three-factor solution (emotional, physical and social) was found with factor analysis. Internal reliability (α = 0.94) and test-retest reliability (ICC = 0.87) were significantly high. Correlations between CANDI and HADS (rs = 0.67), and BSI (rs = 0.69) and FACT-G (rs = -0.76) were moderate and significant in the expected direction. Conclusions CANDI is a valid and reliable scale in cancer patients with a three-factor structure (emotional, physical and social) in the Turkish language. PMID:27621931

  2. Validation of an administrative claims-based diagnostic code for pneumonia in a US-based commercially insured COPD population

    PubMed Central

    Kern, David M; Davis, Jill; Williams, Setareh A; Tunceli, Ozgur; Wu, Bingcao; Hollis, Sally; Strange, Charlie; Trudo, Frank

    2015-01-01

    Objective To estimate the accuracy of claims-based pneumonia diagnoses in COPD patients using clinical information in medical records as the reference standard. Methods Selecting from a repository containing members’ data from 14 regional United States health plans, this validation study identified pneumonia diagnoses within a group of patients initiating treatment for COPD between March 1, 2009 and March 31, 2012. Patients with ≥1 claim for pneumonia (International Classification of Diseases Version 9-CM code 480.xx–486.xx) were identified during the 12 months following treatment initiation. A subset of 800 patients was randomly selected to abstract medical record data (paper based and electronic) for a target sample of 400 patients, to estimate validity within 5% margin of error. Positive predictive value (PPV) was calculated for the claims diagnosis of pneumonia relative to the reference standard, defined as a documented diagnosis in the medical record. Results A total of 388 records were reviewed; 311 included a documented pneumonia diagnosis, indicating 80.2% (95% confidence interval [CI]: 75.8% to 84.0%) of claims-identified pneumonia diagnoses were validated by the medical charts. Claims-based diagnoses in inpatient or emergency departments (n=185) had greater PPV versus outpatient settings (n=203), 87.6% (95% CI: 81.9%–92.0%) versus 73.4% (95% CI: 66.8%–79.3%), respectively. Claims-diagnoses verified with paper-based charts had similar PPV as the overall study sample, 80.2% (95% CI: 71.1%–87.5%), and higher PPV than those linked to electronic medical records, 73.3% (95% CI: 65.5%–80.2%). Combined paper-based and electronic records had a higher PPV, 87.6% (95% CI: 80.9%–92.6%). Conclusion Administrative claims data indicating a diagnosis of pneumonia in COPD patients are supported by medical records. The accuracy of a medical record diagnosis of pneumonia remains unknown. With increased use of claims data in medical research, COPD researchers

  3. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is to describe the real world phenomena that have specific properties; especially those that are in large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases it is impossible. Therefore, Miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understand the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising of multiple interacting agent. They have been used in the different areas; for instance, geographic information system (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc) for geospatial modelling is an indication of the growing interest of users to use of special capabilities of ABMS. Since ABMS is inherently similar to human cognition, therefore it could be built easily and applicable to wide range applications than a traditional simulation. But a key challenge about ABMS is difficulty in their validation and verification. Because of frequent emergence patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after reviewing on Principles and Concepts of ABM for and its applications, the validation techniques and challenges of ABM validation are discussed.

  4. Designing, validation and feasibility of a yoga-based intervention for elderly

    PubMed Central

    Hariprasad, V. R.; Varambally, S.; Varambally, P. T.; Thirthalli, J.; Basavaraddi, I. V.; Gangadhar, B. N.

    2013-01-01

    Context: Ageing is an unavoidable facet of life. Yogic practices have been reported to promote healthy aging. Previous studies have used either yoga therapy interventions derived from a particular school of yoga or have tested specific yogic practices like meditation. Aims: This study reports the development, validation and feasibility of a yoga-based intervention for elderly with or without mild cognitive impairment. Settings and Design: The study was conducted at the Advanced Centre for Yoga, National Institute for Mental Health and Neurosciences, Bangalore. The module was developed, validated, and then pilot-tested on volunteers. Materials and Methods: The first part of the study consisted of designing of a yoga module based on traditional and contemporary yogic literature. This yoga module along with the three case vignettes of elderly with cognitive impairment were sent to 10 yoga experts to help develop the intended yoga-based intervention. In the second part, the feasibility of the developed yoga-based intervention was tested. Results: Experts (n=10) opined the yoga-based intervention will be useful in improving cognition in elderly, but with some modifications. Frequent supervised yoga sessions, regular follow-ups, addition/deletion/modifications of yoga postures were some of the suggestions. Ten elderly consented and eight completed the pilot testing of the intervention. All of them were able to perform most of the Sukṣmavyayāma, Prāṇāyāma and Nādānusaṇdhāna (meditation) technique without difficulty. Some of the participants (n=3) experienced difficulty in performing postures seated on the ground. Most of the older adults experienced difficulty in remembering and completing entire sequence of yoga-based intervention independently. Conclusions: The yoga based intervention is feasible in the elderly with cognitive impairment. Testing with a larger sample of older adults is warranted. PMID:24049197

  5. Applying Kane's Validity Framework to a Simulation Based Assessment of Clinical Competence

    ERIC Educational Resources Information Center

    Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud

    2018-01-01

    Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high stakes simulation based assessment strategy for certifying…

  6. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Single-Item Screening for Agoraphobic Symptoms: Validation of a Web-Based Audiovisual Screening Instrument

    PubMed Central

    van Ballegooijen, Wouter; Riper, Heleen; Donker, Tara; Martin Abello, Katherina; Marks, Isaac; Cuijpers, Pim

    2012-01-01

    The advent of web-based treatments for anxiety disorders creates a need for quick and valid online screening instruments, suitable for a range of social groups. This study validates a single-item multimedia screening instrument for agoraphobia, part of the Visual Screener for Common Mental Disorders (VS-CMD), and compares it with the text-based agoraphobia items of the PDSS-SR. The study concerned 85 subjects in an RCT of the effects of web-based therapy for panic symptoms. The VS-CMD item and items 4 and 5 of the PDSS-SR were validated by comparing scores to the outcomes of the CIDI diagnostic interview. Screening for agoraphobia was found moderately valid for both the multimedia item (sensitivity.81, specificity.66, AUC.734) and the text-based items (AUC.607–.697). Single-item multimedia screening for anxiety disorders should be further developed and tested in the general population and in patient, illiterate and immigrant samples. PMID:22844391

  8. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    PubMed Central

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Summary Background Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079

  9. Validation results of specifications for motion control interoperability

    NASA Astrophysics Data System (ADS)

    Szabo, Sandor; Proctor, Frederick M.

    1997-01-01

    The National Institute of Standards and Technology (NIST) is participating in the Department of Energy Technologies Enabling Agile Manufacturing (TEAM) program to establish interface standards for machine tool, robot, and coordinate measuring machine controllers. At NIST, the focus is to validate potential application programming interfaces (APIs) that make it possible to exchange machine controller components with a minimal impact on the rest of the system. This validation is taking place in the enhanced machine controller (EMC) consortium and is in cooperation with users and vendors of motion control equipment. An area of interest is motion control, including closed-loop control of individual axes and coordinated path planning. Initial tests of the motion control APIs are complete. The APIs were implemented on two commercial motion control boards that run on two different machine tools. The results for a baseline set of APIs look promising, but several issues were raised. These include resolving differing approaches in how motions are programmed and defining a standard measurement of performance for motion control. This paper starts with a summary of the process used in developing a set of specifications for motion control interoperability. Next, the EMC architecture and its classification of motion control APIs into two classes, Servo Control and Trajectory Planning, are reviewed. Selected APIs are presented to explain the basic functionality and some of the major issues involved in porting the APIs to other motion controllers. The paper concludes with a summary of the main issues and ways to continue the standards process.

  10. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions.

    PubMed

    Boß, Leif; Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-08-31

    The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users' satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). At first, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=-.35, P<.001) and perceived stress (r=-.48, P<.001) demonstrated the construct validity of the scale. The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user's general satisfaction with Web-based interventions for depression and

  11. Is Learner Self-Assessment Reliable and Valid in a Web-Based Portfolio Environment for High School Students?

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui

    2013-01-01

    This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…

  12. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    PubMed

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.

  13. Do placebo based validation standards mimic real batch products behaviour? Case studies.

    PubMed

    Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E

    2011-06-01

    Analytical methods validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns that can be made with this approach is that it may lack an important source of variability that come from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batch samples (tablets and syrups). The results showed that, depending on the type of response function used as calibration curve, there were various degrees of differences in the results accuracy obtained with the two types of samples. Nonetheless the use of spiked placebo validation standards was showed to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit for purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Verification and Validation of Adaptive and Intelligent Systems with Flight Test Results

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Larson, Richard R.

    2009-01-01

    F-15 IFCS project goals are: a) Demonstrate Control Approaches that can Efficiently Optimize Aircraft Performance in both Normal and Failure Conditions [A] & [B] failures. b) Advance Neural Network-Based Flight Control Technology for New Aerospace Systems Designs with a Pilot in the Loop. Gen II objectives include; a) Implement and Fly a Direct Adaptive Neural Network Based Flight Controller; b) Demonstrate the Ability of the System to Adapt to Simulated System Failures: 1) Suppress Transients Associated with Failure; 2) Re-Establish Sufficient Control and Handling of Vehicle for Safe Recovery. c) Provide Flight Experience for Development of Verification and Validation Processes for Flight Critical Neural Network Software.

  15. Predictive validity of cannabis consumption measures: Results from a national longitudinal study.

    PubMed

    Buu, Anne; Hu, Yi-Han; Pampati, Sanjana; Arterberry, Brooke J; Lin, Hsien-Chang

    2017-10-01

    Validating the utility of cannabis consumption measures for predicting later cannabis related symptomatology or progression to cannabis use disorder (CUD) is crucial for prevention and intervention work that may use consumption measures for quick screening. This study examined whether cannabis use quantity and frequency predicted CUD symptom counts, progression to onset of CUD, and persistence of CUD. Data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) at Wave 1 (2001-2002) and Wave 2 (2004-2005) were used to identify three risk samples: (1) current cannabis users at Wave 1 who were at risk for having CUD symptoms at Wave 2; (2) current users without lifetime CUD who were at risk for incident CUD; and (3) current users with past-year CUD who were at risk for persistent CUD. Logistic regression and zero-inflated Poisson models were used to examine the longitudinal effect of cannabis consumption on CUD outcomes. Higher frequency of cannabis use predicted lower likelihood of being symptom-free but it did not predict the severity of CUD symptomatology. Higher frequency of cannabis use also predicted higher likelihood of progression to onset of CUD and persistence of CUD. Cannabis use quantity, however, did not predict any of the developmental stages of CUD symptomatology examined in this study. This study has provided a new piece of evidence to support the predictive validity of cannabis use frequency based on national longitudinal data. The result supports the common practice of including frequency items in cannabis screening tools. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Results from an Independent View on The Validation of Safety-Critical Space Systems

    NASA Astrophysics Data System (ADS)

    Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.

    2013-08-01

    The Independent verification and validation (IV&V) has been a key process for decades, and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is the independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked since customers do not really see the added value of checking thoroughly the validation team work (could be seen as testing the tester's work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties, results obtained and advantages/disadvantages for the industry of these activities. This study will support customers in opting-in or opting-out for this task in future IV&V contracts since we provide concrete results from real case studies in the space embedded systems domain.

  17. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  18. Development and external validation of a prostate health index-based nomogram for predicting prostate cancer

    PubMed Central

    Zhu, Yao; Han, Cheng-Tao; Zhang, Gui-Ming; Liu, Fang; Ding, Qiang; Xu, Jian-Feng; Vidal, Adriana C.; Freedland, Stephen J.; Ng, Chi-Fai; Ye, Ding-Wei

    2015-01-01

    To develop and externally validate a prostate health index (PHI)-based nomogram for predicting the presence of prostate cancer (PCa) at biopsy in Chinese men with prostate-specific antigen 4–10 ng/mL and normal digital rectal examination (DRE). 347 men were recruited from two hospitals between 2012 and 2014 to develop a PHI-based nomogram to predict PCa. To validate these results, we used a separate cohort of 230 men recruited at another center between 2008 and 2013. Receiver operator curves (ROC) were used to assess the ability to predict PCa. A nomogram was derived from the multivariable logistic regression model and its accuracy was assessed by the area under the ROC (AUC). PHI achieved the highest AUC of 0.839 in the development cohort compared to the other predictors (p < 0.001). Including age and prostate volume, a PHI-based nomogram was constructed and rendered an AUC of 0.877 (95% CI 0.813–0.938). The AUC of the nomogram in the validation cohort was 0.786 (95% CI 0.678–0.894). In clinical effectiveness analyses, the PHI-based nomogram reduced unnecessary biopsies from 42.6% to 27% using a 5% threshold risk of PCa to avoid biopsy with no increase in the number of missed cases relative to conventional biopsy decision. PMID:26471350

  19. Designing and validation of a yoga-based intervention for obsessive compulsive disorder.

    PubMed

    Bhat, Shubha; Varambally, Shivarama; Karmani, Sneha; Govindaraj, Ramajayam; Gangadhar, B N

    2016-06-01

    Some yoga-based practices have been found to be useful for patients with obsessive compulsive disorder (OCD). The authors could not find a validated yoga therapy module available for OCD. This study attempted to formulate a generic yoga-based intervention module for OCD. A yoga module was designed based on traditional and contemporary yoga literature. The module was sent to 10 yoga experts for content validation. The experts rated the usefulness of the practices on a scale of 1-5 (5 = extremely useful). The final version of the module was pilot-tested on patients with OCD (n = 17) for both feasibility and effect on symptoms. Eighty-eight per cent (22 out of 25) of the items in the initial module were retained, with modifications in the module as suggested by the experts along with patients' inputs and authors' experience. The module was found to be feasible and showed an improvement in symptoms of OCD on total Yale-Brown Obsessive-Compulsive Scale (YBOCS) score (p = 0.001). A generic yoga therapy module for OCD was validated by experts in the field and found feasible to practice in patients. A decrease in the symptom scores was also found following yoga practice of 2 weeks. Further clinical validation is warranted to confirm efficacy.

  20. Development and Validation of a Symptom-Based Activity Index for Adults with Eosinophilic Esophagitis

    PubMed Central

    Schoepfer, Alain M.; Straumann, Alex; Panczak, Radoslaw; Coslovsky, Michael; Kuehni, Claudia E.; Maurer, Elisabeth; Haas, Nadine A.; Romero, Yvonne; Hirano, Ikuo; Alexander, Jeffrey A.; Gonsalves, Nirmala; Furuta, Glenn T.; Dellon, Evan S.; Leung, John; Collins, Margaret H.; Bussmann, Christian; Netzer, Peter; Gupta, Sandeep K.; Aceves, Seema S.; Chehade, Mirna; Moawad, Fouad J.; Enders, Felicity T.; Yost, Kathleen J.; Taft, Tiffany H.; Kern, Emily; Zwahlen, Marcel; Safroneeva, Ekaterina

    2015-01-01

    BACKGROUND & AIMS Standardized instruments are needed to assess the activity of eosinophilic esophagitis (EoE), to provide endpoints for clinical trials and observational studies. We aimed to develop and validate a patient-reported outcome (PRO) instrument and score, based on items that could account for variations in patients’ assessments of disease severity. We also evaluated relationships between patients’ assessment of disease severity and EoE-associated endoscopic, histologic, and laboratory findings. METHODS We collected information from 186 patients with EoE in Switzerland and the US (69.4% male; median age, 43 years) via surveys (n = 135), focus groups (n = 27), and semi-structured interviews (n = 24). Items were generated for the instruments to assess biologic activity based on physician input. Linear regression was used to quantify the extent to which variations in patient-reported disease characteristics could account for variations in patients’ assessment of EoE severity. The PRO instrument was prospectively used in 153 adult patients with EoE (72.5% male; median age, 38 years), and validated in an independent group of 120 patients with EoE (60.8% male; median age, 40.5 years). RESULTS Seven PRO factors that are used to assess characteristics of dysphagia, behavioral adaptations to living with dysphagia, and pain while swallowing accounted for 67% of the variation in patients’ assessment of disease severity. Based on statistical consideration and patient input, a 7-day recall period was selected. Highly active EoE, based on endoscopic and histologic findings, was associated with an increase in patient-assessed disease severity. In the validation study, the mean difference between patient assessment of EoE severity and PRO score was 0.13 (on a scale from 0 to 10). CONCLUSIONS We developed and validated an EoE scoring system based on 7 PRO items that assesses symptoms over a 7-day recall period. Clinicaltrials.gov number: NCT00939263. PMID

  1. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.

    PubMed

    Naismith, Laura M; Cavalcanti, Rodrigo B

    2015-11-01

    Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.

  2. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    PubMed Central

    Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  3. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  4. Validation of the Evidence-Based Practice Process Assessment Scale

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2011-01-01

    Objective: This report describes the reliability, validity, and sensitivity of a scale that assesses practitioners' perceived familiarity with, attitudes of, and implementation of the evidence-based practice (EBP) process. Method: Social work practitioners and second-year master of social works (MSW) students (N = 511) were surveyed in four sites…

  5. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  6. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  7. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  8. Is Validation Always Valid? Cross-Cultural Complexities of Standard-Based Instruments Migrating out of Their Context

    ERIC Educational Resources Information Center

    Pastori, Giulia; Pagani, Valentina

    2017-01-01

    The international application of standard-based measures to assess ECEC quality raises crucial questions concerning the cultural complexities and the problematic validity of instruments migrating out of their cultural cradle; nevertheless the topic has received only marginal attention in the literature. This paper, which aims to address this gap,…

  9. An experimental validation of a statistical-based damage detection approach.

    DOT National Transportation Integrated Search

    2011-01-01

    In this work, a previously-developed, statistical-based, damage-detection approach was validated for its ability to : autonomously detect damage in bridges. The damage-detection approach uses statistical differences in the actual and : predicted beha...

  10. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  11. Assessing impact of physical activity-based youth development programs: validation of the Life Skills Transfer Survey (LSTS).

    PubMed

    Weiss, Maureen R; Bolter, Nicole D; Kipp, Lindsay E

    2014-09-01

    A signature characteristic of positive youth development (PYD) programs is the opportunity to develop life skills, such as social, behavioral, and moral competencies, that can be generalized to domains beyond the immediate activity. Although context-specific instruments are available to assess developmental outcomes, a measure of life skills transfer would enable evaluation of PYD programs in successfully teaching skills that youth report using in other domains. The purpose of our studies was to develop and validate a measure of perceived life skills transfer, based on data collected with The First Tee, a physical activity-based PYD program. In 3 studies, we conducted a series of steps to provide content and construct validity and internal consistency reliability for the Life Skills Transfer Survey (LSTS), a measure of perceived life skills transfer. Study 1 provided content validity for the LSTS that included 8 life skills and 50 items. Study 2 revealed construct validity (structural validity) through a confirmatory factor analysis and convergent validity by correlating scores on the LSTS with scores on an assessment tool that measures a related construct. Study 3 offered additional construct validity by reassessing youth 1 year later and showing that scores during both time periods were invariant in factor pattern, loadings, and variances and covariances. Studies 2 and 3 demonstrated internal consistency reliability of the LSTS. RESULTS from 3 studies provide evidence of content and construct validity and internal consistency reliability for the LSTS, which can be used in evaluation research with youth development programs.

  12. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2017-07-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated to Condition 4 of the DKEFS Color-Word Interference Test (p = -.425), and the Wechsler Test of Adult Reading (p  = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  13. Are validated outcome measures used in distal radial fractures truly valid?

    PubMed Central

    Nienhuis, R. W.; Bhandari, M.; Goslings, J. C.; Poolman, R. W.; Scholtes, V. A. B.

    2016-01-01

    Objectives Patient-reported outcome measures (PROMs) are often used to evaluate the outcome of treatment in patients with distal radial fractures. Which PROM to select is often based on assessment of measurement properties, such as validity and reliability. Measurement properties are assessed in clinimetric studies, and results are often reviewed without considering the methodological quality of these studies. Our aim was to systematically review the methodological quality of clinimetric studies that evaluated measurement properties of PROMs used in patients with distal radial fractures, and to make recommendations for the selection of PROMs based on the level of evidence of each individual measurement property. Methods A systematic literature search was performed in PubMed, EMbase, CINAHL and PsycINFO databases to identify relevant clinimetric studies. Two reviewers independently assessed the methodological quality of the studies on measurement properties, using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Level of evidence (strong / moderate / limited / lacking) for each measurement property per PROM was determined by combining the methodological quality and the results of the different clinimetric studies. Results In all, 19 out of 1508 identified unique studies were included, in which 12 PROMs were rated. The Patient-rated wrist evaluation (PRWE) and the Disabilities of Arm, Shoulder and Hand questionnaire (DASH) were evaluated on most measurement properties. The evidence for the PRWE is moderate that its reliability, validity (content and hypothesis testing), and responsiveness are good. The evidence is limited that its internal consistency and cross-cultural validity are good, and its measurement error is acceptable. There is no evidence for its structural and criterion validity. The evidence for the DASH is moderate that its responsiveness is good. The evidence is limited that its reliability and the

  14. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in part 473— (a) The initial denial...

  15. Initial validation of a web-based self-administered neuropsychological test battery for older adults and seniors.

    PubMed

    Hansen, Tor Ivar; Haferstrom, Elise Christina D; Brunner, Jan F; Lehn, Hanne; Håberg, Asta Kristine

    2015-01-01

    Computerized neuropsychological tests are effective in assessing different cognitive domains, but are often limited by the need of proprietary hardware and technical staff. Web-based tests can be more accessible and flexible. We aimed to investigate validity, effects of computer familiarity, education, and age, and the feasibility of a new web-based self-administered neuropsychological test battery (Memoro) in older adults and seniors. A total of 62 (37 female) participants (mean age 60.7 years) completed the Memoro web-based neuropsychological test battery and a traditional battery composed of similar tests intended to measure the same cognitive constructs. Participants were assessed on computer familiarity and how they experienced the two batteries. To properly test the factor structure of Memoro, an additional factor analysis in 218 individuals from the HUNT population was performed. Comparing Memoro to traditional tests, we observed good concurrent validity (r = .49-.63). The performance on the traditional and Memoro test battery was consistent, but differences in raw scores were observed with higher scores on verbal memory and lower in spatial memory in Memoro. Factor analysis indicated two factors: verbal and spatial memory. There were no correlations between test performance and computer familiarity after adjustment for age or age and education. Subjects reported that they preferred web-based testing as it allowed them to set their own pace, and they did not feel scrutinized by an administrator. Memoro showed good concurrent validity compared to neuropsychological tests measuring similar cognitive constructs. Based on the current results, Memoro appears to be a tool that can be used to assess cognitive function in older and senior adults. Further work is necessary to ascertain its validity and reliability.

  16. Development and validation of a food-based diet quality index for New Zealand adolescents

    PubMed Central

    2013-01-01

    Background As there is no population-specific, simple food-based diet index suitable for examination of diet quality in New Zealand (NZ) adolescents, there is a need to develop such a tool. Therefore, this study aimed to develop an adolescent-specific diet quality index based on dietary information sourced from a Food Questionnaire (FQ) and examine its validity relative to a four-day estimated food record (4DFR) obtained from a group of adolescents aged 14 to 18 years. Methods A diet quality index for NZ adolescents (NZDQI-A) was developed based on ‘Adequacy’ and ‘Variety’ of five food groups reflecting the New Zealand Food and Nutrition Guidelines for Healthy Adolescents. The NZDQI-A was scored from zero to 100, with a higher score reflecting a better diet quality. Forty-one adolescents (16 males, 25 females, aged 14–18 years) each completed the FQ and a 4DFR. The test-retest reliability of the FQ-derived NZDQI-A scores over a two-week period and the relative validity of the scores compared to the 4DFR were estimated using Pearson’s correlations. Construct validity was examined by comparing NZDQI-A scores against nutrient intakes obtained from the 4DFR. Results The NZDQI-A derived from the FQ showed good reliability (r = 0.65) and reasonable agreement with 4DFR in ranking participants by scores (r = 0.39). More than half of the participants were classified into the same thirds of scores while 10% were misclassified into the opposite thirds by the two methods. Higher NZDQI-A scores were also associated with lower total fat and saturated fat intakes and higher iron intakes. Conclusions Higher NZDQI-A scores were associated with more desirable fat and iron intakes. The scores derived from either FQ or 4DFR were comparable and reproducible when repeated within two weeks. The NZDQI-A is relatively valid and reliable in ranking diet quality in adolescents at a group level even in a small sample size. Further studies are required to test the

  17. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    PubMed

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries

  18. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    PubMed

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are

  19. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process ofmore » stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of

  20. The VALiDATe29 MRI Based Multi-Channel Atlas of the Squirrel Monkey Brain.

    PubMed

    Schilling, Kurt G; Gao, Yurui; Stepniewska, Iwona; Wu, Tung-Lin; Wang, Feng; Landman, Bennett A; Gore, John C; Chen, Li Min; Anderson, Adam W

    2017-10-01

    We describe the development of the first digital atlas of the normal squirrel monkey brain and present the resulting product, VALiDATe29. The VALiDATe29 atlas is based on multiple types of magnetic resonance imaging (MRI) contrast acquired on 29 squirrel monkeys, and is created using unbiased, nonlinear registration techniques, resulting in a population-averaged stereotaxic coordinate system. The atlas consists of multiple anatomical templates (proton density, T1, and T2* weighted), diffusion MRI templates (fractional anisotropy and mean diffusivity), and ex vivo templates (fractional anisotropy and a structural MRI). In addition, the templates are combined with histologically defined cortical labels, and diffusion tractography defined white matter labels. The combination of intensity templates and image segmentations make this atlas suitable for the fundamental atlas applications of spatial normalization and label propagation. Together, this atlas facilitates 3D anatomical localization and region of interest delineation, and enables comparisons of experimental data across different subjects or across different experimental conditions. This article describes the atlas creation and its contents, and demonstrates the use of the VALiDATe29 atlas in typical applications. The atlas is freely available to the scientific community.

  1. Validating agent based models through virtual worlds.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and dis- rupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of ne-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environ- ment for long periods of time provide an alternative sourcemore » of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly re ect those seen in present-day con icts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for ne-grained behavioral data by acquiring and analyzing game data a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to compliment social modeling e orts where a signi cant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underly what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular

  2. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  3. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  4. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  5. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  6. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  7. Evaluation of MuSyQ land surface albedo based on LAnd surface Parameters VAlidation System (LAPVAS)

    NASA Astrophysics Data System (ADS)

    Dou, B.; Wen, J.; Xinwen, L.; Zhiming, F.; Wu, S.; Zhang, Y.

    2016-12-01

    satellite derived Land surface albedo is an essential climate variable which controls the earth energy budget and it can be used in applications such as climate change, hydrology, and numerical weather prediction. However, the accuracy and uncertainty of surface albedo products should be evaluated with a reliable reference truth data prior to applications. A new comprehensive and systemic project of china, called the Remote Sensing Application Network (CRSAN), has been launched recent years. Two subjects of this project is developing a Multi-source data Synergized Quantitative Remote Sensin g Production System ( MuSyQ ) and a Web-based validation system named LAnd surface remote sensing Product VAlidation System (LAPVAS) , which aims to generate a quantitative remote sensing product for ecosystem and environmental monitoring and validate them with a reference validation data and a standard validation system, respectively. Land surface BRDF/albedo is one of product datasets of MuSyQ which has a pentad period with 1km spatial resolution and is derived by Multi-sensor Combined BRDF Inversion ( MCBI ) Model. In this MuSyQ albedo evaluation, a multi-validation strategy is implemented by LAPVAS, including directly and multi-scale validation with field measured albedo and cross validation with MODIS albedo product with different land cover. The results reveal that MuSyQ albedo data with a 5-day temporal resolution is in higher sensibility and accuracy during land cover change period, e.g. snowing. But results without regard to snow or changed land cover, MuSyQ albedo generally is in similar accuracy with MODIS albedo and meet the climate modeling requirement of an absolute accuracy of 0.05.

  8. Systematic development and validation of a theory-based questionnaire to assess toddler feeding.

    PubMed

    Hurley, Kristen M; Pepper, M Reese; Candelaria, Margo; Wang, Yan; Caulfield, Laura E; Latta, Laura; Hager, Erin R; Black, Maureen M

    2013-12-01

    This paper describes the development and validation of a 27-item caregiver-reported questionnaire on toddler feeding. The development of the Toddler Feeding Behavior Questionnaire was based on a theory of interactive feeding that incorporates caregivers' responses to concerns about their children's dietary intake, appetite, size, and behaviors rather than relying exclusively on caregiver actions. Content validity included review by an expert panel (n = 7) and testing in a pilot sample (n = 105) of low-income mothers of toddlers. Construct validity and reliability were assessed among a second sample of low-income mothers of predominately African-American (70%) toddlers aged 12-32 mo (n = 297) participating in the baseline evaluation of a toddler overweight prevention study. Internal consistency (Cronbach's α: 0.64-0.87) and test-retest (0.57-0.88) reliability were acceptable for most constructs. Exploratory and confirmatory factor analyses revealed 5 theoretically derived constructs of feeding: responsive, forceful/pressuring, restrictive, indulgent, and uninvolved (root mean square error of approximation = 0.047, comparative fit index = 0.90, standardized root mean square residual = 0.06). Statistically significant (P < 0.05) convergent validity results further validated the scale, confirming established relations between feeding behaviors, toddler overweight status, perceived toddler fussiness, and maternal mental health. The Toddler Feeding Behavior Questionnaire adds to the field by providing a brief instrument that can be administered in 5 min to examine how caregiver-reported feeding behaviors relate to toddler health and behavior.

  9. Fisk-based criteria to support validation of detection methods for drinking water and air.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonell, M.; Bhattacharyya, M.; Finster, M.

    2009-02-18

    This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminantsmore » identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined. The gap in directly applicable values is

  10. Methodologies for validating ray-based forward model using finite element method in ultrasonic array data simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Nixon, Andrew; Barber, Tom; Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony; Wilcox, Paul

    2018-04-01

    In this paper, a methodology of using finite element (FE) model to validate a ray-based model in the simulation of full matrix capture (FMC) ultrasonic array data set is proposed. The overall aim is to separate signal contributions from different interactions in FE results for easier comparing each individual component in the ray-based model results. This is achieved by combining the results from multiple FE models of the system of interest that include progressively more geometrical features while preserving the same mesh structure. It is shown that the proposed techniques allow the interactions from a large number of different ray-paths to be isolated in FE results and compared directly to the results from a ray-based forward model.

  11. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... working days of identification; (vi) For retrospective review, (excluding DRG validation and post...

  12. Reliability and Validity of an Internet-based Questionnaire Measuring Lifetime Physical Activity

    PubMed Central

    De Vera, Mary A.; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-01-01

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005–2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity. PMID:20876666

  13. Reliability and validity of an internet-based questionnaire measuring lifetime physical activity.

    PubMed

    De Vera, Mary A; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-11-15

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005-2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity.

  14. Complete Statistical Survey Results of 1982 Texas Competency Validation Project.

    ERIC Educational Resources Information Center

    Rogers, Sandra K.; Dahlberg, Maurine F.

    This report documents a project to develop current statewide validated competencies for auto mechanics, diesel mechanics, welding, office occupations, and printing. Section 1 describes the four steps used in the current competency validation project and provides a standardized process for conducting future studies at the local or statewide level.…

  15. Monte Carol-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-today reactor operations, whereas the VARIANT code ismore » based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC[sup 2]-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.« less

  16. [Reliability and validity of the Chinese version on Comprehensive Scores for Financial Toxicity based on the patient-reported outcome measures].

    PubMed

    Yu, H H; Bi, X; Liu, Y Y

    2017-08-10

    Objective: To evaluate the reliability and validity of the Chinese version on comprehensive scores for financial toxicity (COST), based on the patient-reported outcome measures. Methods: A total of 118 cancer patients were face-to-face interviewed by well-trained investigators. Cronbach's α and Pearson correlation coefficient were used to evaluate reliability. Content validity index (CVI) and exploratory factor analysis (EFA) were used to evaluate the content validity and construct validity, respectively. Results: The Cronbach's α coefficient appeared as 0.889 for the whole questionnaire, with the results of test-retest were between 0.77 and 0.98. Scale-content validity index (S-CVI) appeared as 0.82, with item-content validity index (I-CVI) between 0.83 and 1.00. Two components were extracted from the Exploratory factor analysis, with cumulative rate as 68.04% and loading>0.60 on every item. Conclusion: The Chinese version of COST scale showed high reliability and good validity, thus can be applied to assess the financial situation in cancer patients.

  17. Clinical Validity of the ADI-R in a US-Based Latino Population.

    PubMed

    Vanegas, Sandra B; Magaña, Sandra; Morales, Miguel; McNamara, Ellyn

    2016-05-01

    The Autism Diagnostic Interview-Revised (ADI-R) has been validated as a tool to aid in the diagnosis of Autism; however, given the growing diversity in the United States, the ADI-R must be validated for different languages and cultures. This study evaluates the validity of the ADI-R in a US-based Latino, Spanish-speaking population of 50 children and adolescents with ASD and developmental disability. Sensitivity and specificity of the ADI-R as a diagnostic tool were moderate, but lower than previously reported values. Validity of the social reciprocity and restrictive and repetitive behaviors domains was high, but low in the communication domain. Findings suggest that language discordance between caregiver and child may influence reporting of communication symptoms and contribute to lower sensitivity and specificity.

  18. Validation of the Learning Progression-based Assessment of Modern Genetics in a college context

    NASA Astrophysics Data System (ADS)

    Todd, Amber; Romine, William L.

    2016-07-01

    Building upon a methodologically diverse research foundation, we adapted and validated the Learning Progression-based Assessment of Modern Genetics (LPA-MG) for college students' knowledge of the domain. Toward collecting valid learning progression-based measures in a college majors context, we redeveloped and content validated a majority of a previous version of the LPA-MG which was developed for high school students. Using a Rasch model calibrated on 316 students from 2 sections of majors introductory biology, we demonstrate the validity of this version and describe how college students' ideas of modern genetics are likely to change as the students progress from low to high understanding. We then utilize these findings to build theory around the connections college students at different levels of understanding make within and across the many ideas within the domain.

  19. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  20. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  1. Development and Validation of a Primary Care-Based Family Health History and Decision Support Program (MeTree)

    PubMed Central

    Orlando, Lori A.; Buchanan, Adam H.; Hahn, Susan E.; Christianson, Carol A.; Powell, Karen P.; Skinner, Celette Sugg; Chesnut, Blair; Blach, Colette; Due, Barbara; Ginsburg, Geoffrey S.; Henrich, Vincent C.

    2016-01-01

    INTRODUCTION Family health history is a strong predictor of disease risk. To reduce the morbidity and mortality of many chronic diseases, risk-stratified evidence-based guidelines strongly encourage the collection and synthesis of family health history to guide selection of primary prevention strategies. However, the collection and synthesis of such information is not well integrated into clinical practice. To address barriers to collection and use of family health histories, the Genomedical Connection developed and validated MeTree, a Web-based, patient-facing family health history collection and clinical decision support tool. MeTree is designed for integration into primary care practices as part of the genomic medicine model for primary care. METHODS We describe the guiding principles, operational characteristics, algorithm development, and coding used to develop MeTree. Validation was performed through stakeholder cognitive interviewing, a genetic counseling pilot program, and clinical practice pilot programs in 2 community-based primary care clinics. RESULTS Stakeholder feedback resulted in changes to MeTree’s interface and changes to the phrasing of clinical decision support documents. The pilot studies resulted in the identification and correction of coding errors and the reformatting of clinical decision support documents. MeTree’s strengths in comparison with other tools are its seamless integration into clinical practice and its provision of action-oriented recommendations guided by providers’ needs. LIMITATIONS The tool was validated in a small cohort. CONCLUSION MeTree can be integrated into primary care practices to help providers collect and synthesize family health history information from patients with the goal of improving adherence to risk-stratified evidence-based guidelines. PMID:24044145

  2. Accuracy assessment/validation methodology and results of 2010–11 land-cover/land-use data for Pools 13, 26, La Grange, and Open River South, Upper Mississippi River System

    USGS Publications Warehouse

    Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.

    2016-01-11

    Similar to an AA, validation involves generating random points based on the total area for each map class. However, instead of collecting field data, two or three individuals not involved with the photo-interpretative mapping separately review each of the points onscreen and record a best-fit vegetation type(s) for each site. Once the individual analyses are complete, results are joined together and a comparative analysis is performed. The objective of this initial analysis is to identify areas where the validation results were in agreement (matches) and areas where validation results were in disagreement (mismatches). The two or three individuals then perform an analysis, looking at each mismatched site, and agree upon a final validation class. (If two vegetation types at a specific site appear to be equally prevalent, the validation team is permitted to assign the site two best-fit vegetation types.) Following the validation team’s comparative analysis of vegetation assignments, the data are entered into a database and compared to the mappers’ vegetation assignments. Agreements and disagreements between the map and validation classes are identified, and a contingency table is produced. This document presents the AA processes/results for Pools 13 and La Grange, as well as the validation process/results for Pools 13 and 26 and Open River South.

  3. Development and validation of science, technology, engineering and mathematics (STEM) based instructional material

    NASA Astrophysics Data System (ADS)

    Gustiani, Ineu; Widodo, Ari; Suwarma, Irma Rahma

    2017-05-01

    This study is intended to examine the development and validation of simple machines instructional material that developed based on Science, Technology, Engineering and Mathematics (STEM) framework that provides guidance to help students learn and practice for real life and enable individuals to use knowledge and skills they need to be an informed citizen. Sample of this study consist of one class of 8th grader at a junior secondary school in Bandung, Indonesia. To measure student learning, a pre-test and post-test were given before and after implementation of the STEM based instructional material. In addition, a questionnaire of readability was given to examine the clarity and difficulty level of each page of instructional material. A questionnaire of students' response towards instructional material given to students and teachers at the end of instructional material reading session to measure layout aspects, content aspects and utility aspects of instructional material for being used in the junior secondary school classroom setting. The results show that readability aspect and students' response towards STEM based instructional material of STEM based instructional material is categorized as very high. Pretest and posttest responses revealed that students retained significant amounts information upon completion of the STEM instructional material. Student overall learning gain is 0.67 which is categorized as moderate. In summary, STEM based instructional material that was developed is valid enough to be used as educational materials necessary for conducting effective STEM education.

  4. A Rasch-Based Validation of the Vocabulary Size Test

    ERIC Educational Resources Information Center

    Beglar, David

    2010-01-01

    The primary purpose of this study was to provide preliminary validity evidence for a 140-item form of the Vocabulary Size Test, which is designed to measure written receptive knowledge of the first 14,000 words of English. Nineteen native speakers of English and 178 native speakers of Japanese participated in the study. Analyses based on the Rasch…

  5. Preparation, validation and user-testing of pictogram-based patient information leaflets for tuberculosis.

    PubMed

    Shrestha, Anmol; Rajesh, V; Dessai, Sneha Shamrao; Stanly, Sharon Mary; Mateti, Uday Venkat

    2018-05-25

    Patient education is of paramount importance with regard to the condition of the disease and the treatment given besides lifestyle remodelling in order to get the desired therapeutic outcome. When verbal information is provided to the patients, they often tend to forget it. Pictorial aids or pictograms, as they are commonly known, are tools that are widely used for imparting knowledge to the patients. The aim of the study is to prepare and validate a Pictogram-based Patient Information Leaflet (P-PILs) on Tuberculosis (TB). P-PILs have been prepared from tertiary, secondary and primary sources. The knowledge-based questions are prepared with respect to the P-PILs. The baseline knowledge of the volunteers and patients have been analyzed before administering the P-PILs by using the validated questionnaire. The post-knowledge of the volunteers and patients has been analyzed after administering the P-PILs (20 min) by using the same questionnaire and the user-opinion has also been obtained at the end. The study results show that the mean scores of the overall user-testing knowledge assessment are found to have improved significantly from the pre-P-PILs administration score of 62.67 to the post-P-PILs administration score of 91. The overall user-opinion about the P-PILs has been found to be good (75%) followed by average (25%). The present study shows that there is significant improvement in the knowledge levels of the patients and volunteers after reading the validated leaflets. The Pictogram-based Patient Information Leaflets are found to be an effective educational tool for TB patients. Copyright © 2018. Published by Elsevier Ltd.

  6. Validation of a new method for finding the rotational axes of the knee using both marker-based roentgen stereophotogrammetric analysis and 3D video-based motion analysis for kinematic measurements.

    PubMed

    Roland, Michelle; Hull, M L; Howell, S M

    2011-05-01

    In a previous paper, we reported the virtual axis finder, which is a new method for finding the rotational axes of the knee. The virtual axis finder was validated through simulations that were subject to limitations. Hence, the objective of the present study was to perform a mechanical validation with two measurement modalities: 3D video-based motion analysis and marker-based roentgen stereophotogrammetric analysis (RSA). A two rotational axis mechanism was developed, which simulated internal-external (or longitudinal) and flexion-extension (FE) rotations. The actual axes of rotation were known with respect to motion analysis and RSA markers within ± 0.0006 deg and ± 0.036 mm and ± 0.0001 deg and ± 0.016 mm, respectively. The orientation and position root mean squared errors for identifying the longitudinal rotation (LR) and FE axes with video-based motion analysis (0.26 deg, 0.28 m, 0.36 deg, and 0.25 mm, respectively) were smaller than with RSA (1.04 deg, 0.84 mm, 0.82 deg, and 0.32 mm, respectively). The random error or precision in the orientation and position was significantly better (p=0.01 and p=0.02, respectively) in identifying the LR axis with video-based motion analysis (0.23 deg and 0.24 mm) than with RSA (0.95 deg and 0.76 mm). There was no significant difference in the bias errors between measurement modalities. In comparing the mechanical validations to virtual validations, the virtual validations produced comparable errors to those of the mechanical validation. The only significant difference between the errors of the mechanical and virtual validations was the precision in the position of the LR axis while simulating video-based motion analysis (0.24 mm and 0.78 mm, p=0.019). These results indicate that video-based motion analysis with the equipment used in this study is the superior measurement modality for use with the virtual axis finder but both measurement modalities produce satisfactory results. The lack of significant differences between

  7. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.

  8. 20 CFR 404.346 - Your relationship as wife, husband, widow, or widower based upon a deemed valid marriage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... widower based upon a deemed valid marriage. 404.346 Section 404.346 Employees' Benefits SOCIAL SECURITY... relationship as wife, husband, widow, or widower based upon a deemed valid marriage. (a) General. If your... explained in § 404.345, you may be eligible for benefits based upon a deemed valid marriage. You will be...

  9. Voxel based morphometry in optical coherence tomography: validation and core findings

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Chen, Min; Carass, Aaron; Jedynak, Bruno M.; Al-Louzi, Omar; Solomon, Sharon D.; Saidha, Shiv; Calabresi, Peter A.; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) of the human retina is now becoming established as an important modality for the detection and tracking of various ocular diseases. Voxel based morphometry (VBM) is a long standing neuroimaging analysis technique that allows for the exploration of the regional differences in the brain. There has been limited work done in developing registration based methods for OCT, which has hampered the advancement of VBM analyses in OCT based population studies. Following on from our recent development of an OCT registration method, we explore the potential benefits of VBM analysis in cohorts of healthy controls (HCs) and multiple sclerosis (MS) patients. Specifically, we validate the stability of VBM analysis in two pools of HCs showing no significant difference between the two populations. Additionally, we also present a retrospective study of age and sex matched HCs and relapsing remitting MS patients, demonstrating results consistent with the reported literature while providing insight into the retinal changes associated with this MS subtype.

  10. The Students' Perceptions of School Success Promoting Strategies Inventory (SPSI): development and validity evidence based studies.

    PubMed

    Moreira, Paulo A S; Oliveira, João Tiago; Dias, Paulo; Vaz, Filipa Machado; Torres-Oliveira, Isabel

    2014-08-04

    Students' perceptions about school success promotion strategies are of great importance for schools, as they are an indicator of how students perceive the school success promotion strategies. The objective of this study was to develop and analyze the validity evidence based of The Students' Perceptions of School Success Promoting Strategies Inventory (SPSI), which assesses both individual students' perceptions of their school success promoting strategies, and dimensions of school quality. A structure of 7 related factors was found, which showed good adjustment indices in two additional different samples, suggesting that this is a well-fitting multi-group model (p < .001). All scales presented good reliability values. Schools with good academic results registered higher values in Career development, Active learning, Proximity, Educational Technologies and Extra-curricular activities (p < .05). SPSI showed to be adequate to measure within-schools (students within schools) dimensions of school success. In addition, there is preliminary evidence for its adequacy for measuring school success promotion dimensions between schools for 4 dimensions. This study supports the validity evidence based of the SPSI (validity evidence based on test content, on internal structure, on relations to other variables and on consequences of testing). Future studies should test for within- and between-level variance in a bigger sample of schools.

  11. Validation of Ground-based Optical Estimates of Auroral Electron Precipitation Energy Deposition

    NASA Astrophysics Data System (ADS)

    Hampton, D. L.; Grubbs, G. A., II; Conde, M.; Lynch, K. A.; Michell, R.; Zettergren, M. D.; Samara, M.; Ahrns, M. J.

    2017-12-01

    One of the major energy inputs into the high latitude ionosphere and mesosphere is auroral electron precipitation. Not only does the kinetic energy get deposited, the ensuing ionization in the E and F-region ionosphere modulates parallel and horizontal currents that can dissipate in the form of Joule heating. Global models to simulate these interactions typically use electron precipitation models that produce a poor representation of the spatial and temporal complexity of auroral activity as observed from the ground. This is largely due to these precipitation models being based on averages of multiple satellite overpasses separated by periods much longer than typical auroral feature durations. With the development of regional and continental observing networks (e.g. THEMIS ASI), the possibility of ground-based optical observations producing quantitative estimates of energy deposition with temporal and spatial scales comparable to those known to be exhibited in auroral activity become a real possibility. Like empirical precipitation models based on satellite overpasses such optics-based estimates are subject to assumptions and uncertainties, and therefore require validation. Three recent sounding rocket missions offer such an opportunity. The MICA (2012), GREECE (2014) and Isinglass (2017) missions involved detailed ground based observations of auroral arcs simultaneously with extensive on-board instrumentation. These have afforded an opportunity to examine the results of three optical methods of determining auroral electron energy flux, namely 1) ratio of auroral emissions, 2) green line temperature vs. emission altitude, and 3) parametric estimates using white-light images. We present comparisons from all three methods for all three missions and summarize the temporal and spatial scales and coverage over which each is valid.

  12. Development and initial validation of a cognitive-based work-nonwork conflict scale.

    PubMed

    Ezzedeen, Souha R; Swiercz, Paul M

    2007-06-01

    Current research related to work and life outside work specifies three types of work-nonwork conflict: time, strain, and behavior-based. Overlooked in these models is a cognitive-based type of conflict whereby individuals experience work-nonwork conflict from cognitive preoccupation with work. Four studies on six different groups (N=549) were undertaken to develop and validate an initial measure of this construct. Structural equation modeling confirmed a two-factor, nine-item scale. Hypotheses regarding cognitive-based conflict's relationship with life satisfaction, work involvement, work-nonwork conflict, and work hours were supported. The relationship with knowledge work was partially supported in that only the cognitive dimension of cognitive-based conflict was related to extent of knowledge work. Hypotheses regarding cognitive-based conflict's relationship with family demands were rejected in that the cognitive dimension correlated positively rather than negatively with number of dependent children and perceived family demands. The study provides encouraging preliminary evidence of scale validity.

  13. HIV serostatus disclosure: development and validation of indicators considering target and modality. Results from a community-based research in 5 countries.

    PubMed

    Préau, Marie; Beaulieu-Prévost, Dominic; Henry, Emilie; Bernier, Adeline; Veillette-Bourbeau, Ludivine; Otis, Joanne

    2015-12-01

    HIV serostatus disclosure is a complex challenge for persons living with HIV (PLHIV). Despite its beneficial effects, it can also lead to stigmatization and rejection. The current lack of multi-dimensional measurement tools impede an in-depth understanding of the dynamic of disclosure. To develop and validate complex measures of serostatus disclosure. This international community based research study was performed by joint research teams (researchers/community based organizations (CBO)) in five countries (Democratic Republic of the Congo, Ecuador, Mali, Morocco and Romania). A convenience sample of 1500 people living with HIV (PLHIV) in contact with local CBO were recruited in 2011 (300 in each country). Face-to-face interviews were performed using a 125-item questionnaire covering HIV status disclosure to 23 potential disclosure targets and related issues (including personal history with HIV, people's reaction to disclosure, sexuality). A principal component analysis and a hierarchical cluster analysis were performed, in order to identify the main components of HIV disclosure, create measures and classify participants into profiles. Patterns of disclosure were summarized using two main measures: direct and indirect disclosure. Disclosure to sexual partners, whether steady or not, was different from patterns of disclosure to other targets. Among the participants, three profiles emerged - labelled Restricted disclosure, Mainly indirect disclosure and Mainly direct disclosure, respectively representing 61%, 13% and 26% of the total sample. The profiles were associated with different aspects of PLHIV's lives, including self-efficacy, functional limitations and social exclusion. Patterns varied across the five studied countries. Results suggest that multi-dimensional constructs should be used to measure disclosure in order to improve understanding of the disclosure process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    PubMed

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over amore » burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.« less

  16. Development and validation of the Body and Appearance Self-Conscious Emotions Scale (BASES).

    PubMed

    Castonguay, Andrée L; Sabiston, Catherine M; Crocker, Peter R E; Mack, Diane E

    2014-03-01

    The purpose of these studies was to develop a psychometrically sound measure of shame, guilt, authentic pride, and hubristic pride for use in body and appearance contexts. In Study 1, 41 potential items were developed and assessed for item quality and comprehension. In Study 2, a panel of experts (N=8; M=11, SD=6.5 years of experience) reviewed the scale and items for evidence of content validity. Participants in Study 3 (n=135 males, n=300 females) completed the BASES and various body image, personality, and emotion scales. A separate sample (n=155; 35.5% male) in Study 3 completed the BASES twice using a two-week time interval. The BASES subscale scores demonstrated evidence for internal consistency, item-total correlations, concurrent, convergent, incremental, and discriminant validity, and 2-week test-retest reliability. The 4-factor solution was a good fit in confirmatory factor analysis, reflecting body-related shame, guilt, authentic and hubristic pride subscales of the BASES. The development and validation of the BASES may help advance body image and self-conscious emotion research by providing a foundation to examine the unique antecedents and outcomes of these specific emotional experiences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin

    1993-01-01

    The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for protectives. The FDNS code is used to compute such a flow and the results are compared to benchmark quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack a = O. Grid independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.

  18. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on requirements project principle, this method will not affect the conventional design elements, and can effectively connect the requirements with design. It realizes the modern civil jet development concept, which is “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method of the requirements are detailed introduced in the study, with the hope of providing the experiences to other civil jet product design.

  19. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology—consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameters calibrations at each unit problem level. A Bayesianmore » calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale up of a larger carbon capture device.« less

  20. Systematic Development and Validation of a Theory-Based Questionnaire to Assess Toddler Feeding12

    PubMed Central

    Hurley, Kristen M.; Pepper, M. Reese; Candelaria, Margo; Wang, Yan; Caulfield, Laura E.; Latta, Laura; Hager, Erin R.; Black, Maureen M.

    2013-01-01

    This paper describes the development and validation of a 27-item caregiver-reported questionnaire on toddler feeding. The development of the Toddler Feeding Behavior Questionnaire was based on a theory of interactive feeding that incorporates caregivers’ responses to concerns about their children’s dietary intake, appetite, size, and behaviors rather than relying exclusively on caregiver actions. Content validity included review by an expert panel (n = 7) and testing in a pilot sample (n = 105) of low-income mothers of toddlers. Construct validity and reliability were assessed among a second sample of low-income mothers of predominately African-American (70%) toddlers aged 12–32 mo (n = 297) participating in the baseline evaluation of a toddler overweight prevention study. Internal consistency (Cronbach’s α: 0.64–0.87) and test-retest (0.57–0.88) reliability were acceptable for most constructs. Exploratory and confirmatory factor analyses revealed 5 theoretically derived constructs of feeding: responsive, forceful/pressuring, restrictive, indulgent, and uninvolved (root mean square error of approximation = 0.047, comparative fit index = 0.90, standardized root mean square residual = 0.06). Statistically significant (P < 0.05) convergent validity results further validated the scale, confirming established relations between feeding behaviors, toddler overweight status, perceived toddler fussiness, and maternal mental health. The Toddler Feeding Behavior Questionnaire adds to the field by providing a brief instrument that can be administered in 5 min to examine how caregiver-reported feeding behaviors relate to toddler health and behavior. PMID:24068792

  1. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation

    PubMed Central

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-01-01

    Background While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. Objective This paper reports the challenges of survey validation inherent in a small Web-based health survey research. Methods The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Results Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Conclusions Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. PMID:29691203

  2. Validity of a computerized population registry of dementia based on clinical databases.

    PubMed

    Mar, J; Arrospide, A; Soto-Gordoa, M; Machón, M; Iruin, Á; Martinez-Lage, P; Gabilondo, A; Moreno-Izco, F; Gabilondo, A; Arriola, L

    2018-05-08

    The handling of information through digital media allows innovative approaches for identifying cases of dementia through computerized searches within the clinical databases that include systems for coding diagnoses. The aim of this study was to analyze the validity of a dementia registry in Gipuzkoa based on the administrative and clinical databases existing in the Basque Health Service. This is a descriptive study based on the evaluation of available data sources. First, through review of medical records, the diagnostic validity was evaluated in 2 samples of cases identified and not identified as dementia. The sensitivity, specificity and positive and negative predictive value of the diagnosis of dementia were measured. Subsequently, the cases of living dementia in December 31, 2016 were searched in the entire Gipuzkoa population to collect sociodemographic and clinical variables. The validation samples included 986 cases and 327 no cases. The calculated sensitivity was 80.2% and the specificity was 99.9%. The negative predictive value was 99.4% and positive value was 95.1%. The cases in Gipuzkoa were 10,551, representing 65% of the cases predicted according to the literature. Antipsychotic medication were taken by a 40% and a 25% of the cases were institutionalized. A registry of dementias based on clinical and administrative databases is valid and feasible. Its main contribution is to show the dimension of dementia in the health system. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  3. Local Validation of Global Estimates of Biosphere Properties: Synthesis of Scaling Methods and Results Across Several Major Biomes

    NASA Technical Reports Server (NTRS)

    Cohen, Warren B.; Wessman, Carol A.; Aber, John D.; VanderCaslte, John R.; Running, Steven W.

    1998-01-01

    To assist in validating future MODIS land cover, LAI, IPAR, and NPP products, this project conducted a series of prototyping exercises that resulted in enhanced understanding of the issues regarding such validation. As a result, we have several papers to appear as a special issue of Remote Sensing of Environment in 1999. Also, we have been successful at obtaining a follow-on grant to pursue actual validation of these products over the next several years. This document consists of a delivery letter, including a listing of published papers.

  4. A blood-based predictor for neocortical Aβ burden in Alzheimer's disease: results from the AIBL study.

    PubMed

    Burnham, S C; Faux, N G; Wilson, W; Laws, S M; Ames, D; Bedo, J; Bush, A I; Doecke, J D; Ellis, K A; Head, R; Jones, G; Kiiveri, H; Martins, R N; Rembach, A; Rowe, C C; Salvado, O; Macaulay, S L; Masters, C L; Villemagne, V L

    2014-04-01

    Dementia is a global epidemic with Alzheimer's disease (AD) being the leading cause. Early identification of patients at risk of developing AD is now becoming an international priority. Neocortical Aβ (extracellular β-amyloid) burden (NAB), as assessed by positron emission tomography (PET), represents one such marker for early identification. These scans are expensive and are not widely available, thus, there is a need for cheaper and more widely accessible alternatives. Addressing this need, a blood biomarker-based signature having efficacy for the prediction of NAB and which can be easily adapted for population screening is described. Blood data (176 analytes measured in plasma) and Pittsburgh Compound B (PiB)-PET measurements from 273 participants from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study were utilised. Univariate analysis was conducted to assess the difference of plasma measures between high and low NAB groups, and cross-validated machine-learning models were generated for predicting NAB. These models were applied to 817 non-imaged AIBL subjects and 82 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) for validation. Five analytes showed significant difference between subjects with high compared to low NAB. A machine-learning model (based on nine markers) achieved sensitivity and specificity of 80 and 82%, respectively, for predicting NAB. Validation using the ADNI cohort yielded similar results (sensitivity 79% and specificity 76%). These results show that a panel of blood-based biomarkers is able to accurately predict NAB, supporting the hypothesis for a relationship between a blood-based signature and Aβ accumulation, therefore, providing a platform for developing a population-based screen.

  5. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  6. Performing Verification and Validation in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  7. Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Smith, WIlliam L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier

    2007-01-01

    The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning-High resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft insitu instruments, dedicated dropsondes, radiosondes, and ground based Raman Lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.

  8. Statistical Validation of Image Segmentation Quality Based on a Spatial Overlap Index1

    PubMed Central

    Zou, Kelly H.; Warfield, Simon K.; Bharatha, Aditya; Tempany, Clare M.C.; Kaus, Michael R.; Haker, Steven J.; Wells, William M.; Jolesz, Ferenc A.; Kikinis, Ron

    2005-01-01

    Rationale and Objectives To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy. Materials and Methods The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed and logit-transformed values were compared in the mean with the analysis of variance (ANOVA). Results Example 1: The mean DSCs of 0.883 (range, 0.876–0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819–0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: Meningiomas (0.519–0.893), astrocytomas (0.487–0.972), and other mixed gliomas (0.490–0.899). Conclusion The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks. PMID:14974593

  9. Validity of proposed DSM-5 diagnostic criteria for nicotine use disorder: results from 734 Israeli lifetime smokers

    PubMed Central

    Shmulewitz, D.; Wall, M.M.; Aharonovich, E.; Spivak, B.; Weizman, A.; Frisch, A.; Grant, B. F.; Hasin, D.

    2013-01-01

    Background The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) proposes aligning nicotine use disorder (NUD) criteria with those for other substances, by including the current DSM fourth edition (DSM-IV) nicotine dependence (ND) criteria, three abuse criteria (neglect roles, hazardous use, interpersonal problems) and craving. Although NUD criteria indicate one latent trait, evidence is lacking on: (1) validity of each criterion; (2) validity of the criteria as a set; (3) comparative validity between DSM-5 NUD and DSM-IV ND criterion sets; and (4) NUD prevalence. Method Nicotine criteria (DSM-IV ND, abuse and craving) and external validators (e.g. smoking soon after awakening, number of cigarettes per day) were assessed with a structured interview in 734 lifetime smokers from an Israeli household sample. Regression analysis evaluated the association between validators and each criterion. Receiver operating characteristic analysis assessed the association of the validators with the DSM-5 NUD set (number of criteria endorsed) and tested whether DSM-5 or DSM-IV provided the most discriminating criterion set. Changes in prevalence were examined. Results Each DSM-5 NUD criterion was significantly associated with the validators, with strength of associations similar across the criteria. As a set, DSM-5 criteria were significantly associated with the validators, were significantly more discriminating than DSM-IV ND criteria, and led to increased prevalence of binary NUD (two or more criteria) over ND. Conclusions All findings address previous concerns about the DSM-IV nicotine diagnosis and its criteria and support the proposed changes for DSM-5 NUD, which should result in improved diagnosis of nicotine disorders. PMID:23312475

  10. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data

  11. Development and Validation of a Christian-Based Grief Recovery Scale

    ERIC Educational Resources Information Center

    Jen Der Pan, Peter; Deng, Liang-Yu F.; Tsai, S. L.; Chen, Ho-Yuan J.; Yuan, Sheng-Shiou Jenny

    2014-01-01

    The purpose of this study was to develop and validate a Christian-based Grief Recovery Scale (CGRS) which was used to measure Christians recovering from grief after a significant loss. Taiwanese Christian participants were recruited from churches and a comprehensive university in northern Taiwan. They were affected by both the Christian faith and…

  12. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    PubMed

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive values. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value <0.25 in the univariate analysis were further evaluated in multivariate models using backward elimination. Discrimination was assessed using receiver operating curve. Calibration was evaluated using the McFadden's R2. Net reclassification index (NRI) associated with incorporation of laboratory results was calculated. Results were internally validated. A model similar to existing CRC prediction models which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with a NRI of 47.6 %. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and a NRI of 60.7 %. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.

  13. Population-based validation of a German version of the Brief Resilience Scale

    PubMed Central

    Wenzel, Mario; Stieglitz, Rolf-Dieter; Kunzler, Angela; Bagusat, Christiana; Helmreich, Isabella; Gerlicher, Anna; Kampa, Miriam; Kubiak, Thomas; Kalisch, Raffael; Lieb, Klaus; Tüscher, Oliver

    2018-01-01

    Smith and colleagues developed the Brief Resilience Scale (BRS) to assess the individual ability to recover from stress despite significant adversity. This study aimed to validate the German version of the BRS. We used data from a population-based (sample 1: n = 1.481) and a representative (sample 2: n = 1.128) sample of participants from the German general population (age ≥ 18) to assess reliability and validity. Confirmatory factor analyses (CFA) were conducted to compare one- and two-factorial models from previous studies with a method-factor model which especially accounts for the wording of the items. Reliability was analyzed. Convergent validity was measured by correlating BRS scores with mental health measures, coping, social support, and optimism. Reliability was good (α = .85, ω = .85 for both samples). The method-factor model showed excellent model fit (sample 1: χ2/df = 7.544; RMSEA = .07; CFI = .99; SRMR = .02; sample 2: χ2/df = 1.166; RMSEA = .01; CFI = 1.00; SRMR = .01) which was significantly better than the one-factor model (Δχ2(4) = 172.71, p < .001) or the two-factor model (Δχ2(3) = 31.16, p < .001). The BRS was positively correlated with well-being, social support, optimism, and the coping strategies active coping, positive reframing, acceptance, and humor. It was negatively correlated with somatic symptoms, anxiety and insomnia, social dysfunction, depression, and the coping strategies religion, denial, venting, substance use, and self-blame. To conclude, our results provide evidence for the reliability and validity of the German adaptation of the BRS as well as the unidimensional structure of the scale once method effects are accounted for. PMID:29438435

  14. Development and validation of a smartphone-based digits-in-noise hearing test in South African English.

    PubMed

    Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas

    2015-07-01

    The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9) were recorded and spoken by a first language English female speaker. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphones effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal-hearing in both ears, or normal-hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. The results of five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was developed and validated for South Africa. The mean SRT and speech recognition functions correspond to previous developed telephone-based digits-in-noise tests.

  15. Development and Validation of a Multimedia-Based Assessment of Scientific Inquiry Abilities

    ERIC Educational Resources Information Center

    Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao

    2015-01-01

    The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage such potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) to…

  16. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... DRG validation. (a) Notice of initial denial determination—(1) Parties to be notified. A QIO must... of identification; (vi) For retrospective review, (excluding DRG validation and post procedure review...

  17. Predictive Validity of Curriculum-Based Measures for English Learners at Varying English Proficiency Levels

    ERIC Educational Resources Information Center

    Kim, Jennifer Sun; Vanderwood, Michael L.; Lee, Catherine Y.

    2016-01-01

    This study examined the predictive validity of curriculum-based measures in reading for Spanish-speaking English learners (ELs) at various levels of English proficiency. Third-grade Spanish-speaking EL students were screened during the fall using DIBELS Oral Reading Fluency (DORF) and Daze. Predictive validity was examined in relation to spring…

  18. Validation of Satellite Aerosol Retrievals from AERONET Ground-Based Measurements

    NASA Technical Reports Server (NTRS)

    Holben, Brent; Remer, Lorraine; Torres, Omar; Zhao, Tom; Smith, David E. (Technical Monitor)

    2001-01-01

    Accurate and comprehensive assessment of the parameters that control key atmospheric and biospheric processes including assessment of anthropogenic effects on climate change is a fundamental measurement objective of NASA's EOS program (King and Greenstone, 1999). Satellite assessment programs and associated global climate models require validation and additional parameterization with frequent reliable ground-based observations. A critical and highly uncertain element of the measurement program is characterization of tropospheric aerosols requiring basic observations of aerosols optical and microphysical properties. Unfortunately as yet we do not know the aerosol burden man is contributing to the atmosphere and thus we will have no definitive measure of change for the future. This lack of aerosol assessment is the impetus for some of the EOS measurement activities (Kaufman et al., 1997; King et al., 1999) and the formation of the AERONET program (Holben et al., 1998). The goals of the AERONET program are to develop long term monitoring at globally distributed sites providing critical data for multiannual trend changes in aerosol loading and optical properties with the specific goal of providing a data base for validation of satellite derived aerosol optical properties. The AERONET program has evolved into an international federated network of approximately 100 ground-based remote sensing monitoring stations to characterize the optical and microphysical properties of aerosols.

  19. Detecting symptom exaggeration in combat veterans using the MMPI-2 symptom validity scales: a mixed group validation.

    PubMed

    Tolin, David F; Steenkamp, Maria M; Marx, Brian P; Litz, Brett T

    2010-12-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the application of these scales has been problematic when assessing combat veterans undergoing posttraumatic stress disorder (PTSD) examinations. Mixed group validation (MGV) was employed to determine the efficacy of MMPI-2 exaggeration scales in compensation-seeking (CS) and noncompensation-seeking (NCS) veterans. Unlike CGV, MGV allows for a mix of exaggerating and nonexaggerating individuals in each group, does not require that the exaggeration versus nonexaggerating status of any individual be known, and can be adjusted for different base-rate estimates. MMPI-2 responses of 377 male veterans were examined according to CS versus NCS status. MGV was calculated using 4 sets of base-rate estimates drawn from the literature. The validity scales generally performed well (adequate sensitivity, specificity, and efficiency) under most base-rate estimations, and most produced cutoff scores that showed adequate detection of symptom exaggeration, regardless of base-rate assumptions. These results support the use of MMPI-2 validity scales for PTSD evaluations in veteran populations, even under varying base rates of symptom exaggeration.

  20. Reproducibility, Reliability, and Validity of Fuchsin-Based Beads for the Evaluation of Masticatory Performance.

    PubMed

    Sánchez-Ayala, Alfonso; Farias-Neto, Arcelino; Vilanova, Larissa Soares Reis; Costa, Marina Abrantes; Paiva, Ana Clara Soares; Carreiro, Adriana da Fonte Porto; Mestriner-Junior, Wilson

    2016-08-01

    Rehabilitation of masticatory function is inherent to prosthodontics; however, despite the various techniques for evaluating oral comminution, the methodological suitability of these has not been completely studied. The aim of this study was to determine the reproducibility, reliability, and validity of a test food based on fuchsin beads for masticatory function assessment. Masticatory performance was evaluated in 20 dentate subjects (mean age, 23.3 years) using two kinds of test foods and methods: fuchsin beads and ultraviolet-visible spectrophotometry, and silicone cubes and multiple sieving as gold standard. Three examiners conducted five masticatory performance trials with each test food. Reproducibility of the results from both test foods was separately assessed using the intraclass correlation coefficient (ICC). Reliability and validity of fuchsin bead data were measured by comparing the average mean of absolute differences and the measurement means, respectively, regarding silicone cube data using the paired Student's t-test (α = 0.05). Intraexaminer and interexaminer ICC for the fuchsin bead values were 0.65 and 0.76 (p < 0.001), respectively; those for the silicone cubes values were 0.93 and 0.91 (p < 0.001), respectively. Reliability revealed intraexaminer (p < 0.001) and interexaminer (p < 0.05) differences between the average means of absolute differences of each test foods. Validity also showed differences between the measurement means of each test food (p < 0.001). Intra- and interexaminer reproducibility of the test food based on fuchsin beads for evaluation of masticatory performance were good and excellent, respectively; however, the reliability and validity were low, because fuchsin beads do not measure the grinding capacity of masticatory function as silicone cubes do; instead, this test food describes the crushing potential of teeth. Thus, the two kinds of test foods evaluate different properties of masticatory capacity, confirming fushsin

  1. The Development and Validation of the School-Based Counseling Self-Efficacy Scale

    ERIC Educational Resources Information Center

    Boughfman, Erica M.

    2010-01-01

    The purpose of this study was to develop and validate the School-Based Counseling Self-Efficacy Scale (SB-SES). Two hundred sixty-five (N = 265) licensed mental health professionals participated in this study. Fifty-eight percent of the participants reported experience working as a school-based counselor with the remaining 42% reporting no…

  2. Validation of a personalized dosimetric evaluation tool (Oedipe) for targeted radiotherapy based on the Monte Carlo MCNPX code

    NASA Astrophysics Data System (ADS)

    Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.

    2006-02-01

    Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.

  3. Helicopter simulation validation using flight data

    NASA Technical Reports Server (NTRS)

    Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.

    1982-01-01

    A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model, and to develop models for the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground based simulation. Comparisons of the control strategy obtained in flight with that obtained on the simulator are to be used as the basis for assessing the fidelity of the results obtained in the simulator.

  4. Model-Based Verification and Validation of the SMAP Uplink Processes

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Dubos, Gregory F.; Tirona, Joseph; Standley, Shaun

    2013-01-01

    This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process.Also discussed are the advantages and disadvantages of the methods employed and the lessons learned; which are intended to benefit future model-based and simulation-based V&V development efforts.

  5. Physics-based method to validate and repair flaws in protein structures

    PubMed Central

    Martin, Osvaldo A.; Arnautova, Yelena A.; Icazatti, Alejandro A.; Scheraga, Harold A.; Vila, Jorge A.

    2013-01-01

    A method that makes use of information provided by the combination of 13Cα and 13Cβ chemical shifts, computed at the density functional level of theory, enables one to (i) validate, at the residue level, conformations of proteins and detect backbone or side-chain flaws by taking into account an ensemble average of chemical shifts over all of the conformations used to represent a protein, with a sensitivity of ∼90%; and (ii) provide a set of (χ1/χ2) torsional angles that leads to optimal agreement between the observed and computed 13Cα and 13Cβ chemical shifts. The method has been incorporated into the CheShift-2 protein validation Web server. To test the reliability of the provided set of (χ1/χ2) torsional angles, the side chains of all reported conformations of five NMR-determined protein models were refined by a simple routine, without using NOE-based distance restraints. The refinement of each of these five proteins leads to optimal agreement between the observed and computed 13Cα and 13Cβ chemical shifts for ∼94% of the flaws, on average, without introducing a significantly large number of violations of the NOE-based distance restraints for a distance range ≤ 0.5 Ǻ, in which the largest number of distance violations occurs. The results of this work suggest that use of the provided set of (χ1/χ2) torsional angles together with other observables, such as NOEs, should lead to a fast and accurate refinement of the side-chain conformations of protein models. PMID:24082119

  6. Zero-G experimental validation of a robotics-based inertia identification algorithm

    NASA Astrophysics Data System (ADS)

    Bruggemann, Jeremy J.; Ferrel, Ivann; Martinez, Gerardo; Xie, Pu; Ma, Ou

    2010-04-01

    The need to efficiently identify the changing inertial properties of on-orbit spacecraft is becoming more critical as satellite on-orbit services, such as refueling and repairing, become increasingly aggressive and complex. This need stems from the fact that a spacecraft's control system relies on the knowledge of the spacecraft's inertia parameters. However, the inertia parameters may change during flight for reasons such as fuel usage, payload deployment or retrieval, and docking/capturing operations. New Mexico State University's Dynamics, Controls, and Robotics Research Group has proposed a robotics-based method of identifying unknown spacecraft inertia properties1. Previous methods require firing known thrusts then measuring the thrust, and the velocity and acceleration changes. The new method utilizes the concept of momentum conservation, while employing a robotic device powered by renewable energy to excite the state of the satellite. Thus, it requires no fuel usage or force and acceleration measurements. The method has been well studied in theory and demonstrated by simulation. However its experimental validation is challenging because a 6- degree-of-freedom motion in a zero-gravity condition is required. This paper presents an on-going effort to test the inertia identification method onboard the NASA zero-G aircraft. The design and capability of the test unit will be discussed in addition to the flight data. This paper also introduces the design and development of an airbearing based test used to partially validate the method, in addition to the approach used to obtain reference value for the test system's inertia parameters that can be used for comparison with the algorithm results.

  7. Results of Fall 2001 Pilot: Methodology for Validation of Course Prerequisites.

    ERIC Educational Resources Information Center

    Serban, Andreea M.; Fleming, Steve

    The purpose of this study was to test a methodology that will help Santa Barbara City College (SBCC), California, to validate the course prerequisites that fall under the category of highest level of scrutiny--data collection and analysis--as defined by the Chancellor's Office. This study gathered data for the validation of prerequisites for three…

  8. Reliability and Validity of the Evidence-Based Practice Confidence (EPIC) Scale

    ERIC Educational Resources Information Center

    Salbach, Nancy M.; Jaglal, Susan B.; Williams, Jack I.

    2013-01-01

    Introduction: The reliability, minimal detectable change (MDC), and construct validity of the evidence-based practice confidence (EPIC) scale were evaluated among physical therapists (PTs) in clinical practice. Methods: A longitudinal mail survey was conducted. Internal consistency and test-retest reliability were estimated using Cronbach's alpha…

  9. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.

    2012-03-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real time tide gauge data in the region, with the following main goals: 1. providing easy access to existing forecasts, as well as to its performance and model validation, by means of an adequate visualization tool; 2. generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA). The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts PDF's; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and Western Mediterranean coast based on the MATROOS visualization tool developed by Deltares. Results of validation of the different models and BMA implementation for the main harbours are presented for these regions where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved to be useful in the detection of calibration problems in some of the circulation models, in the identification of the systematic differences between baroclinic and barotropic models for sea level forecasts and to demonstrate the feasibility of providing an overall probabilistic forecast, based on the BMA method.

  10. Feasibility and validity of International Classification of Diseases based case mix indices.

    PubMed

    Yang, Che-Ming; Reinke, William

    2006-10-06

    Severity of illness is an omnipresent confounder in health services research. Resource consumption can be applied as a proxy of severity. The most commonly cited hospital resource consumption measure is the case mix index (CMI) and the best-known illustration of the CMI is the Diagnosis Related Group (DRG) CMI used by Medicare in the U.S. For countries that do not have DRG type CMIs, the adjustment for severity has been troublesome for either reimbursement or research purposes. The research objective of this study is to ascertain the construct validity of CMIs derived from International Classification of Diseases (ICD) in comparison with DRG CMI. The study population included 551 acute care hospitals in Taiwan and 2,462,006 inpatient reimbursement claims. The 18th version of GROUPER, the Medicare DRG classification software, was applied to Taiwan's 1998 National Health Insurance (NHI) inpatient claim data to derive the Medicare DRG CMI. The same weighting principles were then applied to determine the ICD principal diagnoses and procedures based costliness and length of stay (LOS) CMIs. Further analyses were conducted based on stratifications according to teaching status, accreditation levels, and ownership categories. The best ICD-based substitute for the DRG costliness CMI (DRGCMI) is the ICD principal diagnosis costliness CMI (ICDCMI-DC) in general and in most categories with Spearman's correlation coefficients ranging from 0.938-0.462. The highest correlation appeared in the non-profit sector. ICD procedure costliness CMI (ICDCMI-PC) outperformed ICDCMI-DC only at the medical center level, which consists of tertiary care hospitals and is more procedure intensive. The results of our study indicate that an ICD-based CMI can quite fairly approximate the DRGCMI, especially ICDCMI-DC. Therefore, substituting ICDs for DRGs in computing the CMI ought to be feasible and valid in countries that have not implemented DRGs.

  11. Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology

    NASA Astrophysics Data System (ADS)

    García-Barberena, Javier; Ubani, Nora

    2016-05-01

    The present work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic through (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected among one-year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters for simulating the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete year simulation showed very good agreement for the gross and net electric total production. The estimations for these magnitudes show a 1.47% and 2.02% BIAS respectively. The results proved that the simulation software describes with great accuracy the real operation of the power plant and correctly reproduces its transient behavior.

  12. Knowledge based system verification and validation as related to automation of space station subsystems: Rationale for a knowledge based system lifecycle

    NASA Technical Reports Server (NTRS)

    Richardson, Keith; Wong, Carla

    1988-01-01

    The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge Based System (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale of a simple form of the KBS lifecycle is presented, including accommodation to certain critical KBS differences from software development.

  13. Validity of an Exercise Test Based on Habitual Gait Speed in Mobility-Limited Older Adults

    PubMed Central

    Li, Xin; Forman, Daniel E.; Kiely, Dan K.; LaRose, Sharon; Hirschberg, Ronald; Frontera, Walter R.; Bean, Jonathan F.

    2013-01-01

    Objective To evaluate whether a customized exercise tolerance testing (ETT) protocol based on an individual’s habitual gait speed (HGS) on level ground would be a valid mode of exercise testing older adults. Although ETT provides a useful means to risk-stratify adults, age-related declines in gait speed paradoxically limit the utility of standard ETT protocols for evaluating older adults. A customized ETT protocol may be a useful alternative to these standard methods, and this study hypothesized that this alternative approach would be valid. Design We performed a cross-sectional analysis of baseline data from a randomized controlled trial of older adults with observed mobility problems. Screening was performed using a treadmill-based ETT protocol customized for each individual’s HGS. We determined the content validity by assessing the results of the ETTs, and we evaluated the construct validity of treadmill time in relation to the Physical Activity Scale for the Elderly (PASE) and the Late Life Function and Disability Instrument (LLFDI). Setting Outpatient rehabilitation center. Participants Community-dwelling, mobility-limited older adults (N = 141). Interventions Not applicable. Main Outcome Measures Cardiac instability, ETT duration, peak heart rate, peak systolic blood pressure, PASE, and LLFDI. Results Acute cardiac instability was identified in 4 of the participants who underwent ETT. The remaining participants (n = 137, 68% female; mean age, 75.3y) were included in the subsequent analyses. Mean exercise duration was 9.39 minutes, with no significant differences in durations being observed after evaluating among tertiles by HGS status. Mean peak heart rate and mean peak systolic blood pressure were 126.6 beats/ min and 175.0mmHg, respectively. Within separate multivariate models, ETT duration in each of the 3 gait speed groups was significantly associated (P<.05) with PASE and LLFDI. Conclusions Mobility-limited older adults can complete this customized

  14. Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: Creation, development and psychometric validation.

    PubMed

    Fernández-Domínguez, Juan Carlos; de Pedro-Gómez, Joan Ernest; Morales-Asencio, José Miguel; Bennasar-Veny, Miquel; Sastre-Fullana, Pedro; Sesé-Abad, Albert

    2017-01-01

    Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and also in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence of the scores of a new transdisciplinary psychometric tool, the Health Sciences Evidence-Based Practice (HS-EBP), for measuring the construct EBP in Health Sciences professionals. A pilot study and a subsequent two-stage validation test sample were conducted to progressively refine the instrument until a reduced 60-item version with a five-factor latent structure. Reliability was analysed through both Cronbach's alpha coefficient and intraclass correlations (ICC). Latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison aproach. Evidence of criterion validity of the scores obtained was achieved by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables; while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19). Adequate evidence of both reliability and ICC was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; CI 90% RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP showed the capability to find differences between EBP training levels as an important evidence of decision validity. Reliability and validity evidence obtained regarding the HS-EBP confirm the adequate operationalisation of the EBP construct as a process put into practice to respond to every clinical situation arising in the daily practice of professionals in health sciences (transprofessional). The tool could be useful for EBP individual

  15. Validation of an administrative claims-based diagnostic code for pneumonia in a US-based commercially insured COPD population.

    PubMed

    Kern, David M; Davis, Jill; Williams, Setareh A; Tunceli, Ozgur; Wu, Bingcao; Hollis, Sally; Strange, Charlie; Trudo, Frank

    2015-01-01

    To estimate the accuracy of claims-based pneumonia diagnoses in COPD patients using clinical information in medical records as the reference standard. Selecting from a repository containing members' data from 14 regional United States health plans, this validation study identified pneumonia diagnoses within a group of patients initiating treatment for COPD between March 1, 2009 and March 31, 2012. Patients with ≥1 claim for pneumonia (International Classification of Diseases Version 9-CM code 480.xx-486.xx) were identified during the 12 months following treatment initiation. A subset of 800 patients was randomly selected to abstract medical record data (paper based and electronic) for a target sample of 400 patients, to estimate validity within 5% margin of error. Positive predictive value (PPV) was calculated for the claims diagnosis of pneumonia relative to the reference standard, defined as a documented diagnosis in the medical record. A total of 388 records were reviewed; 311 included a documented pneumonia diagnosis, indicating 80.2% (95% confidence interval [CI]: 75.8% to 84.0%) of claims-identified pneumonia diagnoses were validated by the medical charts. Claims-based diagnoses in inpatient or emergency departments (n=185) had greater PPV versus outpatient settings (n=203), 87.6% (95% CI: 81.9%-92.0%) versus 73.4% (95% CI: 66.8%-79.3%), respectively. Claims-diagnoses verified with paper-based charts had similar PPV as the overall study sample, 80.2% (95% CI: 71.1%-87.5%), and higher PPV than those linked to electronic medical records, 73.3% (95% CI: 65.5%-80.2%). Combined paper-based and electronic records had a higher PPV, 87.6% (95% CI: 80.9%-92.6%). Administrative claims data indicating a diagnosis of pneumonia in COPD patients are supported by medical records. The accuracy of a medical record diagnosis of pneumonia remains unknown. With increased use of claims data in medical research, COPD researchers can study pneumonia with confidence that claims

  16. Validating College Course Placement Decisions Based on CLEP Exam Scores: CLEP Placement Validity Study Results. Statistical Report

    ERIC Educational Resources Information Center

    Godfrey, Kelly E.; Jagesic, Sanja

    2016-01-01

    The College-Level Examination Program® (CLEP®) is a computer-based prior-learning assessment that allows examinees the opportunity to demonstrate mastery of knowledge and skills necessary to earn postsecondary course credit in higher education. Currently, there are 33 exams in five subject areas: composition and literature, world languages,…

  17. Development of Learning Models Based on Problem Solving and Meaningful Learning Standards by Expert Validity for Animal Development Course

    NASA Astrophysics Data System (ADS)

    Lufri, L.; Fitri, R.; Yogica, R.

    2018-04-01

    The purpose of this study is to produce a learning model based on problem solving and meaningful learning standards by expert assessment or validation for the course of Animal Development. This research is a development research that produce the product in the form of learning model, which consist of sub product, namely: the syntax of learning model and student worksheets. All of these products are standardized through expert validation. The research data is the level of validity of all sub products obtained using questionnaire, filled by validators from various field of expertise (field of study, learning strategy, Bahasa). Data were analysed using descriptive statistics. The result of the research shows that the problem solving and meaningful learning model has been produced. Sub products declared appropriate by expert include the syntax of learning model and student worksheet.

  18. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.

  19. Validation of XMALab software for marker-based XROMM.

    PubMed

    Knörlein, Benjamin J; Baier, David B; Gatesy, Stephen M; Laurence-Chasen, J D; Brainerd, Elizabeth L

    2016-12-01

    Marker-based XROMM requires software tools for: (1) correcting fluoroscope distortion; (2) calibrating X-ray cameras; (3) tracking radio-opaque markers; and (4) calculating rigid body motion. In this paper we describe and validate XMALab, a new open-source software package for marker-based XROMM (C++ source and compiled versions on Bitbucket). Most marker-based XROMM studies to date have used XrayProject in MATLAB. XrayProject can produce results with excellent accuracy and precision, but it is somewhat cumbersome to use and requires a MATLAB license. We have designed XMALab to accelerate the XROMM process and to make it more accessible to new users. Features include the four XROMM steps (listed above) in one cohesive user interface, real-time plot windows for detecting errors, and integration with an online data management system, XMAPortal. Accuracy and precision of XMALab when tracking markers in a machined object are ±0.010 and ±0.043 mm, respectively. Mean precision for nine users tracking markers in a tutorial dataset of minipig feeding was ±0.062 mm in XMALab and ±0.14 mm in XrayProject. Reproducibility of 3D point locations across nine users was 10-fold greater in XMALab than in XrayProject, and six degree-of-freedom bone motions calculated with a joint coordinate system were 3- to 6-fold more reproducible in XMALab. XMALab is also suitable for tracking white or black markers in standard light videos with optional checkerboard calibration. We expect XMALab to increase both the quality and quantity of animal motion data available for comparative biomechanics research. © 2016. Published by The Company of Biologists Ltd.

  20. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms), than Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. © The Author(s) 2015.

  1. Development and Validation of an Instrument Measuring Theory-Based Determinants of Monitoring Obesogenic Behaviors of Pre-Schoolers among Hispanic Mothers.

    PubMed

    Branscum, Paul; Lora, Karina R

    2016-06-02

    Public health interventions are greatly needed for obesity prevention, and planning for such strategies should include community participation. The study's purpose was to develop and validate a theory-based instrument with low-income, Hispanic mothers of preschoolers, to assess theory-based determinants of maternal monitoring of child's consumption of fruits and vegetables and sugar-sweetened beverages (SSB). Nine focus groups with mothers were conducted to determine nutrition-related behaviors that mothers found as most obesogenic for their children. Next, behaviors were operationally defined and rated for importance and changeability. Two behaviors were selected for investigation (fruits and vegetable and SSB). Twenty semi-structured interviews with mothers were conducted next to develop culturally appropriate items for the instrument. Afterwards, face and content validity were established using a panel of six experts. Finally, the instrument was tested with a sample of 238 mothers. Psychometric properties evaluated included construct validity (using the maximum likelihood extraction method of factor analysis), and internal consistency reliability (Cronbach's alpha). Results suggested that all scales on the instrument were valid and reliable, except for the autonomy scales. Researchers and community planners working with Hispanic families can use this instrument to measure theory-based determinants of parenting behaviors related to preschoolers' consumption of fruits and vegetables, and SSB.

  2. Development and Validation of an Instrument Measuring Theory-Based Determinants of Monitoring Obesogenic Behaviors of Pre-Schoolers among Hispanic Mothers

    PubMed Central

    Branscum, Paul; Lora, Karina R.

    2016-01-01

    Public health interventions are greatly needed for obesity prevention, and planning for such strategies should include community participation. The study’s purpose was to develop and validate a theory-based instrument with low-income, Hispanic mothers of preschoolers, to assess theory-based determinants of maternal monitoring of child’s consumption of fruits and vegetables and sugar-sweetened beverages (SSB). Nine focus groups with mothers were conducted to determine nutrition-related behaviors that mothers found as most obesogenic for their children. Next, behaviors were operationally defined and rated for importance and changeability. Two behaviors were selected for investigation (fruits and vegetable and SSB). Twenty semi-structured interviews with mothers were conducted next to develop culturally appropriate items for the instrument. Afterwards, face and content validity were established using a panel of six experts. Finally, the instrument was tested with a sample of 238 mothers. Psychometric properties evaluated included construct validity (using the maximum likelihood extraction method of factor analysis), and internal consistency reliability (Cronbach’s alpha). Results suggested that all scales on the instrument were valid and reliable, except for the autonomy scales. Researchers and community planners working with Hispanic families can use this instrument to measure theory-based determinants of parenting behaviors related to preschoolers’ consumption of fruits and vegetables, and SSB. PMID:27271643

  3. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint by the outlier pixels which were located neighboring to the target class in the spectral feature space. The overall accuracies for wheat and bare land achieved were as high as 89.25% and 83.65%, respectively. However, target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. The different window sizes were then tested to acquire more wheat pixels for validation set. The results showed that classification accuracy increased with the increasing window size and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient ( C ) and kernel width ( s ), in mapping homogeneous specific land cover.

  4. Proposal for risk-based scientific approach on full and partial validation for general changes in bioanalytical method.

    PubMed

    Mochizuki, Ayumi; Ieki, Katsunori; Kamimori, Hiroshi; Nagao, Akemi; Nakai, Keiko; Nakayama, Akira; Nanba, Eitaro

    2018-04-01

    The guidance and several guidelines on bioanalytical method validation, which were issued by the US FDA, EMA and Ministry of Health, Labour and Welfare, list the 'full' validation parameters; however, none of these provide any details for 'partial' validation. Japan Bioanalysis Forum approved a total of three annual discussion groups from 2012 to 2014. In the discussion groups, members from pharmaceutical companies and contract research organizations discussed the details of partial validation from a risk assessment viewpoint based on surveys focusing on bioanalysis of small molecules using LC-MS/MS in Japan. This manuscript presents perspectives and recommendations for most conceivable changes that can be made to full and partial validations by members of the discussion groups based on their experiences and discussions at the Japan Bioanalysis Forum Symposium.

  5. Achieving accurate compound concentration in cell-based screening: validation of acoustic droplet ejection technology.

    PubMed

    Grant, Richard John; Roberts, Karen; Pointon, Carly; Hodgson, Clare; Womersley, Lynsey; Jones, Darren Craig; Tang, Eric

    2009-06-01

    Compound handling is a fundamental and critical step in compound screening throughout the drug discovery process. Although most compound-handling processes within compound management facilities use 100% DMSO solvent, conventional methods of manual or robotic liquid-handling systems in screening workflows often perform dilutions in aqueous solutions to maintain solvent tolerance of the biological assay. However, the use of aqueous media in these applications can lead to suboptimal data quality due to compound carryover or precipitation during the dilution steps. In cell-based assays, this effect is worsened by the unpredictable physical characteristics of compounds and the low DMSO tolerance within the assay. In some cases, the conventional approaches using manual or automated liquid handling resulted in variable IC(50) dose responses. This study examines the cause of this variability and evaluates the accuracy of screening data in these case studies. A number of liquid-handling options have been explored to address the issues and establish a generic compound-handling workflow to support cell-based screening across our screening functions. The authors discuss the validation of the Labcyte Echo reformatter as an effective noncontact solution for generic compound-handling applications against diverse compound classes using triple-quad liquid chromatography/mass spectrometry. The successful validation and implementation challenges of this technology for direct dosing onto cells in cell-based screening is discussed.

  6. Performance of Case-Based Reasoning Retrieval Using Classification Based on Associations versus Jcolibri and FreeCBR: A Further Validation Study

    NASA Astrophysics Data System (ADS)

    Aljuboori, Ahmed S.; Coenen, Frans; Nsaif, Mohammed; Parsons, David J.

    2018-05-01

    Case-Based Reasoning (CBR) plays a major role in expert system research. However, a critical problem can be met when a CBR system retrieves incorrect cases. Class Association Rules (CARs) have been utilized to offer a potential solution in a previous work. The aim of this paper was to perform further validation of Case-Based Reasoning using a Classification based on Association Rules (CBRAR) to enhance the performance of Similarity Based Retrieval (SBR). The CBRAR strategy uses a classed frequent pattern tree algorithm (FP-CAR) in order to disambiguate wrongly retrieved cases in CBR. The research reported in this paper makes contributions to both fields of CBR and Association Rules Mining (ARM) in that full target cases can be extracted from the FP-CAR algorithm without invoking P-trees and union operations. The dataset used in this paper provided more efficient results when the SBR retrieves unrelated answers. The accuracy of the proposed CBRAR system outperforms the results obtained by existing CBR tools such as Jcolibri and FreeCBR.

  7. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18 camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm due to scaling error was managed to be reduced to 0.77 mm while the correlation of errors to their distance from the origin reduced from 0.855 to 0.209. A simply feasible but less accurate absolute accuracy compensation method using tape measure on large distances was also tested, which resulted in similar scaling compensation compared to the surveying method or direct wand size compensation by a high precision 3D scanner. The presented validation methods can be less precise in some respects as compared to previous techniques, but they address an error type, which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Validation of Underwater Sensor Package Using Feature Based SLAM

    PubMed Central

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based off of acoustics suffer performance issues due to reflections. Additionally, their relatively high cost make them less than ideal for usage on low cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two dimensional rangefinder that can be used on low cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package. PMID:26999142

  9. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of “Validation,” based on “backing up” to the reason why modelers and decision makers seek validation, and from that basis re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: Validation must be based on cues available in the observable world; and it must focus on what can be done to affect thatmore » observable world, i.e. risk management. That in turn leads to two foci: 1.) the risk generating process, 2.) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests -- Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?« less

  10. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  11. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    PubMed

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo was performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forcers (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D (RATIO)) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.

  12. Body surface assessment with 3D laser-based anthropometry: reliability, validation, and improvement of empirical surface formulae.

    PubMed

    Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Scholz, Markus

    2017-02-01

    Body surface area is a physiological quantity relevant for many medical applications. In clinical practice, it is determined by empirical formulae. 3D laser-based anthropometry provides an easy and effective way to measure body surface area but is not ubiquitously available. We used data from laser-based anthropometry from a population-based study to assess validity of published and commonly used empirical formulae. We performed a large population-based study on adults collecting classical anthropometric measurements and 3D body surface assessments (N = 1435). We determined reliability of the 3D body surface assessment and validity of 18 different empirical formulae proposed in the literature. The performance of these formulae is studied in subsets of sex and BMI. Finally, improvements of parameter settings of formulae and adjustments for sex and BMI were considered. 3D body surface measurements show excellent intra- and inter-rater reliability of 0.998 (overall concordance correlation coefficient, OCCC was used as measure of agreement). Empirical formulae of Fujimoto and Watanabe, Shuter and Aslani and Sendroy and Cecchini performed best with excellent concordance with OCCC > 0.949 even in subgroups of sex and BMI. Re-parametrization of formulae and adjustment for sex and BMI slightly improved results. In adults, 3D laser-based body surface assessment is a reliable alternative to estimation by empirical formulae. However, there are empirical formulae showing excellent results even in subgroups of sex and BMI with only little room for improvement.

  13. Internal Cluster Validation on Earthquake Data in the Province of Bengkulu

    NASA Astrophysics Data System (ADS)

    Rini, D. S.; Novianti, P.; Fransiska, H.

    2018-04-01

    K-means method is an algorithm for cluster n object based on attribute to k partition, where k < n. There is a deficiency of algorithms that is before the algorithm is executed, k points are initialized randomly so that the resulting data clustering can be different. If the random value for initialization is not good, the clustering becomes less optimum. Cluster validation is a technique to determine the optimum cluster without knowing prior information from data. There are two types of cluster validation, which are internal cluster validation and external cluster validation. This study aims to examine and apply some internal cluster validation, including the Calinski-Harabasz (CH) Index, Sillhouette (S) Index, Davies-Bouldin (DB) Index, Dunn Index (D), and S-Dbw Index on earthquake data in the Bengkulu Province. The calculation result of optimum cluster based on internal cluster validation is CH index, S index, and S-Dbw index yield k = 2, DB Index with k = 6 and Index D with k = 15. Optimum cluster (k = 6) based on DB Index gives good results for clustering earthquake in the Bengkulu Province.

  14. A New MI-Based Visualization Aided Validation Index for Mining Big Longitudinal Web Trial Data

    PubMed Central

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-01-01

    Web-delivered clinical trials generate big complex data. To help untangle the heterogeneity of treatment effects, unsupervised learning methods have been widely applied. However, identifying valid patterns is a priority but challenging issue for these methods. This paper, built upon our previous research on multiple imputation (MI)-based fuzzy clustering and validation, proposes a new MI-based Visualization-aided validation index (MIVOOS) to determine the optimal number of clusters for big incomplete longitudinal Web-trial data with inflated zeros. Different from a recently developed fuzzy clustering validation index, MIVOOS uses a more suitable overlap and separation measures for Web-trial data but does not depend on the choice of fuzzifiers as the widely used Xie and Beni (XB) index. Through optimizing the view angles of 3-D projections using Sammon mapping, the optimal 2-D projection-guided MIVOOS is obtained to better visualize and verify the patterns in conjunction with trajectory patterns. Compared with XB and VOS, our newly proposed MIVOOS shows its robustness in validating big Web-trial data under different missing data mechanisms using real and simulated Web-trial data. PMID:27482473

  15. Validation of SMAP Surface Soil Moisture Products with Core Validation Sites

    NASA Technical Reports Server (NTRS)

    Colliander, A.; Jackson, T. J.; Bindlish, R.; Chan, S.; Das, N.; Kim, S. B.; Cosh, M. H.; Dunbar, R. S.; Dang, L.; Pashaian, L.; hide

    2017-01-01

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well calibrated in situ soil moisture measurements within SMAP product grid pixels for diverse conditions and locations.The estimation of the average soil moisture within the SMAP product grid pixels based on in situ measurements is more reliable when location specific calibration of the sensors has been performed and there is adequate replication over the spatial domain, with an up-scaling function based on analysis using independent estimates of the soil moisture distribution. SMAP fulfilled these requirements through a collaborative CalVal Partner program.This paper presents the results from 34 candidate core validation sites for the first eleven months of the SMAP mission. As a result of the screening of the sites prior to the availability of SMAP data, out of the 34 candidate sites 18 sites fulfilled all the requirements at one of the resolution scales (at least). The rest of the sites are used as secondary information in algorithm evaluation. The results indicate that the SMAP radiometer-based soil moisture data product meets its expected performance of 0.04 cu m/cu m volumetric soil moisture (unbiased root mean square error); the combined radar-radiometer product is close to its expected performance of 0.04 cu m/cu m, and the radar-based product meets its target accuracy of 0.06 cu m/cu m (the lengths of the combined and radar-based products are truncated to about 10 weeks because of the SMAP radar failure). Upon completing the intensive CalVal phase of the mission the SMAP project will continue to enhance the products in the primary and extended geographic domains, in co-operation with the CalVal Partners, by continuing the comparisons over the existing core validation sites and inclusion of candidate sites that can address shortcomings.

  16. Office Education. North Dakota Validated Task Listing. Competency-Based Vocational Education.

    ERIC Educational Resources Information Center

    North Dakota State Board for Vocational Education, Bismarck.

    Intended to provide a base for vocational office education instructional programs at secondary and postsecondary levels in North Dakota, this task listing describes the skills needed to be performed by program completers, from the viewpoint of workers in office occupations. A listing of task validators (name, occupation, employer, business city,…

  17. Validation of a BOTDR-based system for the detection of smuggling tunnels

    NASA Astrophysics Data System (ADS)

    Elkayam, Itai; Klar, Assaf; Linker, Raphael; Marshall, Alec M.

    2010-04-01

    Cross-border smuggling tunnels enable unmonitored movement of people, drugs and weapons and pose a very serious threat to homeland security. Recently, Klar and Linker (2009) [SPIE paper No. 731603] presented an analytical study of the feasibility of a Brillouin Optical Time Domain Reflectometry (BOTDR) based system for the detection of small sized smuggling tunnels. The current study extends this work by validating the analytical models against real strain measurements in soil obtained from small scale experiments in a geotechnical centrifuge. The soil strains were obtained using an image analysis method that tracked the displacement of discrete patches of soil through a sequence of digital images of the soil around the tunnel during the centrifuge test. The results of the present study are in agreement with those of a previous study which was based on synthetic signals generated using empirical and analytical models from the literature.

  18. Physics-based method to validate and repair flaws in protein structures.

    PubMed

    Martin, Osvaldo A; Arnautova, Yelena A; Icazatti, Alejandro A; Scheraga, Harold A; Vila, Jorge A

    2013-10-15

    A method that makes use of information provided by the combination of (13)C(α) and (13)C(β) chemical shifts, computed at the density functional level of theory, enables one to (i) validate, at the residue level, conformations of proteins and detect backbone or side-chain flaws by taking into account an ensemble average of chemical shifts over all of the conformations used to represent a protein, with a sensitivity of ∼90%; and (ii) provide a set of (χ1/χ2) torsional angles that leads to optimal agreement between the observed and computed (13)C(α) and (13)C(β) chemical shifts. The method has been incorporated into the CheShift-2 protein validation Web server. To test the reliability of the provided set of (χ1/χ2) torsional angles, the side chains of all reported conformations of five NMR-determined protein models were refined by a simple routine, without using NOE-based distance restraints. The refinement of each of these five proteins leads to optimal agreement between the observed and computed (13)C(α) and (13)C(β) chemical shifts for ∼94% of the flaws, on average, without introducing a significantly large number of violations of the NOE-based distance restraints for a distance range ≤ 0.5 , in which the largest number of distance violations occurs. The results of this work suggest that use of the provided set of (χ1/χ2) torsional angles together with other observables, such as NOEs, should lead to a fast and accurate refinement of the side-chain conformations of protein models.

  19. Validating the usability of an interactive Earth Observation based web service for landslide investigation

    NASA Astrophysics Data System (ADS)

    Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben

    2017-04-01

    Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories usually are compiled through ground surveys and manual image interpretation following landslide triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks for improving the collection of landslide information. The planned validation of the EO-based web service does not only cover the analysis of the achievable landslide information quality but also the usability and user friendliness of the user interface. The underlying validation criteria are based on the user requirements and the defined tasks and aims in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback with a web-based questionnaire by following the subsequently described workflow. The users will operate the web-service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide

  20. Towards development and validation of an intraoperative assessment tool for robot-assisted radical prostatectomy training: results of a Delphi study

    PubMed Central

    Morris, Christopher; Hoogenes, Jen; Shayegan, Bobby; Matsumoto, Edward D.

    2017-01-01

    ABSTRACT Introduction As urology training shifts toward competency-based frameworks, the need for tools for high stakes assessment of trainees is crucial. Validated assessment metrics are lacking for many robot-assisted radical prostatectomy (RARP). As it is quickly becoming the gold standard for treatment of localized prostate cancer, the development and validation of a RARP assessment tool for training is timely. Materials and methods We recruited 13 expert RARP surgeons from the United States and Canada to serve as our Delphi panel. Using an initial inventory developed via a modified Delphi process with urology residents, fellows, and staff at our institution, panelists iteratively rated each step and sub-step on a 5-point Likert scale of agreement for inclusion in the final assessment tool. Qualitative feedback was elicited for each item to determine proper step placement, wording, and suggestions. Results Panelist’s responses were compiled and the inventory was edited through three iterations, after which 100% consensus was achieved. The initial inventory steps were decreased by 13% and a skip pattern was incorporated. The final RARP stepwise inventory was comprised of 13 critical steps with 52 sub-steps. There was no attrition throughout the Delphi process. Conclusions Our Delphi study resulted in a comprehensive inventory of intraoperative RARP steps with excellent consensus. This final inventory will be used to develop a valid and psychometrically sound intraoperative assessment tool for use during RARP training and evaluation, with the aim of increasing competency of all trainees. PMID:28379668

  1. A Reliability and Validity of an Instrument to Evaluate the School-Based Assessment System: A Pilot Study

    ERIC Educational Resources Information Center

    Ghazali, Nor Hasnida Md

    2016-01-01

    A valid, reliable and practical instrument is needed to evaluate the implementation of the school-based assessment (SBA) system. The aim of this study is to develop and assess the validity and reliability of an instrument to measure the perception of teachers towards the SBA implementation in schools. The instrument is developed based on a…

  2. Development and validation of a web-based questionnaire for surveying the health and working conditions of high-performance marine craft populations

    PubMed Central

    de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl

    2016-01-01

    Background High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. Aim To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to the musculoskeletal health conditions and performance in high-performance marine craft populations. Method A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. Results The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. Conclusions The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations. PMID:27324717

  3. How well do adolescents recall use of mobile telephones? Results of a validation study

    PubMed Central

    2009-01-01

    Background In the last decade mobile telephone use has become more widespread among children. Concerns expressed about possible health risks have led to epidemiological studies investigating adverse health outcomes associated with mobile telephone use. Most epidemiological studies have relied on self reported questionnaire responses to determine individual exposure. We sought to validate the accuracy of self reported adolescent mobile telephone use. Methods Participants were recruited from year 7 secondary school students in Melbourne, Australia. Adolescent recall of mobile telephone use was assessed using a self administered questionnaire which asked about number and average duration of calls per week. Validation of self reports was undertaken using Software Modified Phones (SMPs) which logged exposure details such as number and duration of calls. Results A total of 59 adolescents participated (39% boys, 61% girls). Overall a modest but significant rank correlation was found between self and validated number of voice calls (ρ = 0.3, P = 0.04) with a sensitivity of 57% and specificity of 66%. Agreement between SMP measured and self reported duration of calls was poorer (ρ = 0.1, P = 0.37). Participants whose parents belonged to the 4th socioeconomic stratum recalled mobile phone use better than others (ρ = 0.6, P = 0.01). Conclusion Adolescent recall of mobile telephone use was only modestly accurate. Caution is warranted in interpreting results of epidemiological studies investigating health effects of mobile phone use in this age group. PMID:19523193

  4. miRTarBase update 2018: a resource for experimentally validated microRNA-target interactions.

    PubMed

    Chou, Chih-Hung; Shrestha, Sirjana; Yang, Chi-Dung; Chang, Nai-Wen; Lin, Yu-Ling; Liao, Kuang-Wen; Huang, Wei-Chi; Sun, Ting-Hsuan; Tu, Siang-Jyun; Lee, Wei-Hsiang; Chiew, Men-Yee; Tai, Chun-San; Wei, Ting-Yen; Tsai, Tzi-Ren; Huang, Hsin-Tzu; Wang, Chung-Yu; Wu, Hsin-Yi; Ho, Shu-Yi; Chen, Pin-Rong; Chuang, Cheng-Hsun; Hsieh, Pei-Jung; Wu, Yi-Shin; Chen, Wen-Liang; Li, Meng-Ju; Wu, Yu-Chun; Huang, Xin-Yi; Ng, Fung Ling; Buddhakosai, Waradee; Huang, Pei-Chun; Lan, Kuan-Chun; Huang, Chia-Yen; Weng, Shun-Long; Cheng, Yeong-Nan; Liang, Chao; Hsu, Wen-Lian; Huang, Hsien-Da

    2018-01-04

    MicroRNAs (miRNAs) are small non-coding RNAs of ∼ 22 nucleotides that are involved in negative regulation of mRNA at the post-transcriptional level. Previously, we developed miRTarBase which provides information about experimentally validated miRNA-target interactions (MTIs). Here, we describe an updated database containing 422 517 curated MTIs from 4076 miRNAs and 23 054 target genes collected from over 8500 articles. The number of MTIs curated by strong evidence has increased ∼1.4-fold since the last update in 2016. In this updated version, target sites validated by reporter assay that are available in the literature can be downloaded. The target site sequence can extract new features for analysis via a machine learning approach which can help to evaluate the performance of miRNA-target prediction tools. Furthermore, different ways of browsing enhance user browsing specific MTIs. With these improvements, miRTarBase serves as more comprehensively annotated, experimentally validated miRNA-target interactions databases in the field of miRNA related research. miRTarBase is available at http://miRTarBase.mbc.nctu.edu.tw/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Accelerating the Development and Validation of New Value-Based Diagnostics by Leveraging Biobanks.

    PubMed

    Schneider, Daniel; Riegman, Peter H J; Cronin, Maureen; Negrouk, Anastassia; Moch, Holger; Balling, Rudi; Penault-Llorca, Frederiques; Zatloukal, Kurt; Horgan, Denis

    The challenges faced in developing value-based diagnostics has resulted in few of these tests reaching the clinic, leaving many treatment modalities without matching diagnostics to select patients for particular therapies. Many patients receive therapies from which they are unlikely to benefit, resulting in worse outcomes and wasted health care resources. The paucity of value-based diagnostics is a result of the scientific challenges in developing predictive markers, specifically: (1) complex biology, (2) a limited research infrastructure supporting diagnostic development, and (3) the lack of incentives for diagnostic developers to invest the necessary resources. Better access to biospecimens can address some of these challenges. Methodologies developed to evaluate biomarkers from biospecimens archived from patients enrolled in randomized clinical trials offer the greatest opportunity to develop and validate high-value molecular diagnostics. An alternative opportunity is to access high-quality biospecimens collected from large public and private longitudinal observational cohorts such as the UK Biobank, the US Million Veteran Program, the UK 100,000 Genomes Project, or the French E3N cohort. Value-based diagnostics can be developed to work in a range of samples including blood, serum, plasma, urine, and tumour tissue, and better access to these high-quality biospecimens with clinical data can facilitate biomarker research. © 2016 S. Karger AG, Basel.

  6. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    NASA Astrophysics Data System (ADS)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.

    2017-06-01

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  7. Investigation of a Sybr-Green-Based Method to Validate DNA Sequences for DNA Computing

    DTIC Science & Technology

    2005-05-01

    OF A SYBR-GREEN-BASED METHOD TO VALIDATE DNA SEQUENCES FOR DNA COMPUTING 6. AUTHOR(S) Wendy Pogozelski, Salvatore Priore, Matthew Bernard ...simulated annealing. Biochemistry, 35, 14077-14089. 15 Pogozelski, W.K., Bernard , M.P. and Macula, A. (2004) DNA code validation using...and Clark, B.F.C. (eds) In RNA Biochemistry and Biotechnology, NATO ASI Series, Kluwer Academic Publishers. Zucker, M. and Stiegler , P. (1981

  8. Which Dimensions of Patient-Centeredness Matter? - Results of a Web-Based Expert Delphi Survey

    PubMed Central

    Zill, Jördis M.; Scholl, Isabelle; Härter, Martin; Dirmaier, Jörg

    2015-01-01

    Background Present models and definitions of patient-centeredness revealed a lack of conceptual clarity. Based on a prior systematic literature review, we developed an integrative model with 15 dimensions of patient-centeredness. The aims of this study were to 1) validate, and 2) prioritize these dimensions. Method A two-round web-based Delphi study was conducted. 297 international experts were invited to participate. In round one they were asked to 1) give an individual rating on a nine-point-scale on relevance and clarity of the dimensions, 2) add missing dimensions, and 3) prioritize the dimensions. In round two, experts received feedback about the results of round one and were asked to reflect and re-rate their own results. The cut-off for the validation of a dimension was a median < 7 on one of the criteria. Results 105 experts participated in round one and 71 in round two. In round one, one new dimension was suggested and included for discussion in round two. In round two, this dimension did not reach sufficient ratings to be included in the model. Eleven dimensions reached a median ≥ 7 on both criteria (relevance and clarity). Four dimensions had a median < 7 on one or both criteria. The five dimensions rated as most important were: patient as a unique person, patient involvement in care, patient information, clinician-patient communication and patient empowerment. Discussion 11 out of the 15 dimensions have been validated through experts’ ratings. Further research on the four dimensions that received insufficient ratings is recommended. The priority order of the dimensions can help researchers and clinicians to focus on the most important dimensions of patient-centeredness. Overall, the model provides a useful framework that can be used in the development of measures, interventions, and medical education curricula, as well as the adoption of a new perspective in health policy. PMID:26539990

  9. What should we mean by empirical validation in hypnotherapy: evidence-based practice in clinical hypnosis.

    PubMed

    Alladin, Assen; Sabatini, Linda; Amundson, Jon K

    2007-04-01

    This paper briefly surveys the trend of and controversy surrounding empirical validation in psychotherapy. Empirical validation of hypnotherapy has paralleled the practice of validation in psychotherapy and the professionalization of clinical psychology, in general. This evolution in determining what counts as evidence for bona fide clinical practice has gone from theory-driven clinical approaches in the 1960s and 1970s through critical attempts at categorization of empirically supported therapies in the 1990s on to the concept of evidence-based practice in 2006. Implications of this progression in professional psychology are discussed in the light of hypnosis's current quest for validation and empirical accreditation.

  10. Reliability and validity of expert assessment based on airborne and urinary measures of nickel and chromium exposure in the electroplating industry.

    PubMed

    Chen, Yu-Cheng; Coble, Joseph B; Deziel, Nicole C; Ji, Bu-Tian; Xue, Shouzheng; Lu, Wei; Stewart, Patricia A; Friesen, Melissa C

    2014-11-01

    The reliability and validity of six experts' exposure ratings were evaluated for 64 nickel-exposed and 72 chromium-exposed workers from six Shanghai electroplating plants based on airborne and urinary nickel and chromium measurements. Three industrial hygienists and three occupational physicians independently ranked the exposure intensity of each metal on an ordinal scale (1-4) for each worker's job in two rounds: the first round was based on responses to an occupational history questionnaire and the second round also included responses to an electroplating industry-specific questionnaire. The Spearman correlation (r(s)) was used to compare each rating's validity to its corresponding subject-specific arithmetic mean of four airborne or four urinary measurements. Reliability was moderately high (weighted kappa range=0.60-0.64). Validity was poor to moderate (r(s)=-0.37-0.46) for both airborne and urinary concentrations of both metals. For airborne nickel concentrations, validity differed by plant. For dichotomized metrics, sensitivity and specificity were higher based on urinary measurements (47-78%) than airborne measurements (16-50%). Few patterns were observed by metal, assessment round, or expert type. These results suggest that, for electroplating exposures, experts can achieve moderately high agreement and (reasonably) distinguish between low and high exposures when reviewing responses to in-depth questionnaires used in population-based case-control studies.

  11. Reliability and validity of expert assessment based on airborne and urinary measures of nickel and chromium exposure in the electroplating industry

    PubMed Central

    Chen, Yu-Cheng; Coble, Joseph B; Deziel, Nicole C.; Ji, Bu-Tian; Xue, Shouzheng; Lu, Wei; Stewart, Patricia A; Friesen, Melissa C

    2014-01-01

    The reliability and validity of six experts’ exposure ratings were evaluated for 64 nickel-exposed and 72 chromium-exposed workers from six Shanghai electroplating plants based on airborne and urinary nickel and chromium measurements. Three industrial hygienists and three occupational physicians independently ranked the exposure intensity of each metal on an ordinal scale (1–4) for each worker's job in two rounds: the first round was based on responses to an occupational history questionnaire and the second round also included responses to an electroplating industry-specific questionnaire. Spearman correlation (rs) was used to compare each rating's validity to its corresponding subject-specific arithmetic mean of four airborne or four urinary measurements. Reliability was moderately-high (weighted kappa range=0.60–0.64). Validity was poor to moderate (rs= -0.37–0.46) for both airborne and urinary concentrations of both metals. For airborne nickel concentrations, validity differed by plant. For dichotomized metrics, sensitivity and specificity were higher based on urinary measurements (47–78%) than airborne measurements (16–50%). Few patterns were observed by metal, assessment round, or expert type. These results suggest that, for electroplating exposures, experts can achieve moderately-high agreement and (reasonably) distinguish between low and high exposures when reviewing responses to in-depth questionnaires used in population-based case-control studies. PMID:24736099

  12. Rocket-Based Combined Cycle Engine Technology Development: Inlet CFD Validation and Application

    NASA Technical Reports Server (NTRS)

    DeBonis, J. R.; Yungster, S.

    1996-01-01

    A CFD methodology has been developed for inlet analyses of Rocket-Based Combined Cycle (RBCC) Engines. A full Navier-Stokes analysis code, NPARC, was used in conjunction with pre- and post-processing tools to obtain a complete description of the flow field and integrated inlet performance. This methodology was developed and validated using results from a subscale test of the inlet to a RBCC 'Strut-Jet' engine performed in the NASA Lewis 1 x 1 ft. supersonic wind tunnel. Results obtained from this study include analyses at flight Mach numbers of 5 and 6 for super-critical operating conditions. These results showed excellent agreement with experimental data. The analysis tools were also used to obtain pre-test performance and operability predictions for the RBCC demonstrator engine planned for testing in the NASA Lewis Hypersonic Test Facility. This analysis calculated the baseline fuel-off internal force of the engine which is needed to determine the net thrust with fuel on.

  13. Exploring Validity of Computer-Based Test Scores with Examinees' Response Behaviors and Response Times

    ERIC Educational Resources Information Center

    Sahin, Füsun

    2017-01-01

    Examining the testing processes, as well as the scores, is needed for a complete understanding of validity and fairness of computer-based assessments. Examinees' rapid-guessing and insufficient familiarity with computers have been found to be major issues that weaken the validity arguments of scores. This study has three goals: (a) improving…

  14. Validation of Remote Sensing Retrieval Products using Data from a Wireless Sensor-Based Online Monitoring in Antarctica.

    PubMed

    Li, Xiuhong; Cheng, Xiao; Yang, Rongjin; Liu, Qiang; Qiu, Yubao; Zhang, Jialin; Cai, Erli; Zhao, Long

    2016-11-17

    Of the modern technologies in polar-region monitoring, the remote sensing technology that can instantaneously form large-scale images has become much more important in helping acquire parameters such as the freezing and melting of ice as well as the surface temperature, which can be used in the research of global climate change, Antarctic ice sheet responses, and cap formation and evolution. However, the acquirement of those parameters is impacted remarkably by the climate and satellite transit time which makes it almost impossible to have timely and continuous observation data. In this research, a wireless sensor-based online monitoring platform (WSOOP) for the extreme polar environment is applied to obtain a long-term series of data which is site-specific and continuous in time. Those data are compared and validated with the data from a weather station at Zhongshan Station Antarctica and the result shows an obvious correlation. Then those data are used to validate the remote sensing products of the freezing and melting of ice and the surface temperature and the result also indicated a similar correlation. The experiment in Antarctica has proven that WSOOP is an effective system to validate remotely sensed data in the polar region.

  15. Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km×10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.

  16. Construct and face validity of a virtual reality-based camera navigation curriculum.

    PubMed

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves

  17. 49 CFR 40.160 - What does the MRO do when a valid test result cannot be produced and a negative result is required?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false What does the MRO do when a valid test result cannot be produced and a negative result is required? 40.160 Section 40.160 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Medical Review Officers and the Verification...

  18. Validation of ELDO approaches for retrospective assessment of cumulative eye lens doses of interventional cardiologists-results from DoReMi project.

    PubMed

    Domienik, J; Farah, J; Struelens, L

    2016-12-01

    The first validation results of the two approaches developed in the ELDO project for retrospective assessment of eye lens doses for interventional cardiologists (ICs) are presented in this paper. The first approach (a) is based on both the readings from the routine whole body dosimeter worn above the lead apron and procedure-dependent conversion coefficients, while the second approach (b) is based on detailed information related to the occupational exposure history of the ICs declared in a questionnaire and eye lens dose records obtained from the relevant literature. The latter approach makes use of various published eye lens doses per procedure as well as the appropriate correction factors which account for the use of radiation protective tools designed to protect the eye lens. To validate both methodologies, comprehensive measurements were performed in several Polish clinics among recruited physicians. Two dosimeters measuring whole body and eye lens doses were worn by every physician for at least two months. The estimated cumulative eye lens doses, calculated from both approaches, were then compared against the measured eye lens dose value for every physician separately. Both approaches results in comparable estimates of eye lens doses and tend to overestimate rather than underestimate the eye lens doses. The measured and estimated doses do not differ, on average, by a factor higher than 2.0 in 85% and 62% of the cases used to validate approach (a) and (b), respectively. In specific cases, however, the estimated doses differ from the measured ones by as much as a factor of 2.7 and 5.1 for method (a) and (b), respectively. As such, the two approaches can be considered accurate when retrospectively estimating the eye lens doses for ICs and will be of great benefit for ongoing epidemiological studies.

  19. Validity and Reliability of Field-Based Measures for Assessing Movement Skill Competency in Lifelong Physical Activities: A Systematic Review.

    PubMed

    Hulteen, Ryan M; Lander, Natalie J; Morgan, Philip J; Barnett, Lisa M; Robertson, Samuel J; Lubans, David R

    2015-10-01

    It has been suggested that young people should develop competence in a variety of 'lifelong physical activities' to ensure that they can be active across the lifespan. The primary aim of this systematic review is to report the methodological properties, validity, reliability, and test duration of field-based measures that assess movement skill competency in lifelong physical activities. A secondary aim was to clearly define those characteristics unique to lifelong physical activities. A search of four electronic databases (Scopus, SPORTDiscus, ProQuest, and PubMed) was conducted between June 2014 and April 2015 with no date restrictions. Studies addressing the validity and/or reliability of lifelong physical activity tests were reviewed. Included articles were required to assess lifelong physical activities using process-oriented measures, as well as report either one type of validity or reliability. Assessment criteria for methodological quality were adapted from a checklist used in a previous review of sport skill outcome assessments. Movement skill assessments for eight different lifelong physical activities (badminton, cycling, dance, golf, racquetball, resistance training, swimming, and tennis) in 17 studies were identified for inclusion. Methodological quality, validity, reliability, and test duration (time to assess a single participant), for each article were assessed. Moderate to excellent reliability results were found in 16 of 17 studies, with 71% reporting inter-rater reliability and 41% reporting intra-rater reliability. Only four studies in this review reported test-retest reliability. Ten studies reported validity results; content validity was cited in 41% of these studies. Construct validity was reported in 24% of studies, while criterion validity was only reported in 12% of studies. Numerous assessments for lifelong physical activities may exist, yet only assessments for eight lifelong physical activities were included in this review

  20. Development and community-based validation of the IDEA study Instrumental Activities of Daily Living (IDEA-IADL) questionnaire

    PubMed Central

    Collingwood, Cecilia; Paddick, Stella-Maria; Kisoli, Aloyce; Dotchin, Catherine L.; Gray, William K.; Mbowe, Godfrey; Mkenda, Sarah; Urasa, Sarah; Mushi, Declare; Chaote, Paul; Walker, Richard W.

    2014-01-01

    Background The dementia diagnosis gap in sub-Saharan Africa (SSA) is large, partly due to difficulties in assessing function, an essential step in diagnosis. Objectives As part of the Identification and Intervention for Dementia in Elderly Africans (IDEA) study, to develop, pilot, and validate an Instrumental Activities of Daily Living (IADL) questionnaire for use in a rural Tanzanian population to assist in the identification of people with dementia alongside cognitive screening. Design The questionnaire was developed at a workshop for rural primary healthcare workers, based on culturally appropriate roles and usual activities of elderly people in this community. It was piloted in 52 individuals under follow-up from a dementia prevalence study. Validation subsequently took place during a community dementia-screening programme. Construct validation against gold standard clinical dementia diagnosis using DSM-IV criteria was carried out on a stratified sample of the cohort and validity assessed using area under the receiver operating characteristic (AUROC) curve analysis. Results An 11-item questionnaire (IDEA-IADL) was developed after pilot testing. During formal validation on 130 community-dwelling elderly people who presented for screening, the AUROC curve was 0.896 for DSM-IV dementia when used in isolation and 0.937 when used in conjunction with the IDEA cognitive screen, previously validated in Tanzania. The internal consistency was 0.959. Performance on the IDEA-IADL was not biased with regard to age, gender or education level. Conclusions The IDEA-IADL questionnaire appears to be a useful aid to dementia screening in this setting. Further validation in other healthcare settings in SSA is required. PMID:25537940

  1. Validation of internet-based self-reported anthropometric, demographic data and participant identity in the Food4Me study

    USDA-ARS?s Scientific Manuscript database

    BACKGROUND In e-health intervention studies, there are concerns about the reliability of internet-based, self-reported (SR) data and about the potential for identity fraud. This study introduced and tested a novel procedure for assessing the validity of internet-based, SR identity and validated anth...

  2. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of adhoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in the future digita-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformtion are presented. Basic working definitions of two pivotal ideas (validation and system life-cyle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  3. Recommendations for the validation of cell-based assays used for the detection of neutralizing antibody immune responses elicited against biological therapeutics.

    PubMed

    Gupta, Shalini; Devanarayan, Viswanath; Finco, Deborah; Gunn, George R; Kirshner, Susan; Richards, Susan; Rup, Bonita; Song, An; Subramanyam, Meena

    2011-07-15

    The administration of biological therapeutics may result in the development of anti-drug antibodies (ADAs) in treated subjects. In some cases, ADA responses may result in the loss of therapeutic efficacy due to the formation of neutralizing ADAs (NAbs). An important characteristic of anti-drug NAbs is their direct inhibitory effect on the pharmacological activity of the therapeutic. Neutralizing antibody responses are of particular concern for biologic products with an endogenous homolog whose activity can be potentially dampened or completely inhibited by the NAbs leading to an autoimmune-type deficiency syndrome. Therefore, it is important that ADAs are detected and characterized appropriately using sensitive and reliable methods. The design, development and optimization of cell-based assays used for detection of NAbs have been published previously by Gupta et al. 2007 [1]. This paper provides recommendations on best practices for the validation of cell-based NAb assay and suggested validation parameters based on the experience of the authors. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. An ecologically valid performance-based social functioning assessment battery for schizophrenia.

    PubMed

    Shi, Chuan; He, Yi; Cheung, Eric F C; Yu, Xin; Chan, Raymond C K

    2013-12-30

    Psychiatrists pay more attention to the social functioning outcome of schizophrenia nowadays. How to evaluate the real world function among schizophrenia is a challenging task due to culture difference, there is no such kind of instrument in terms of the Chinese setting. This study aimed to report the validation of an ecologically valid performance-based everyday functioning assessment for schizophrenia, namely the Beijing Performance-based Functional Ecological Test (BJ-PERFECT). Fifty community-dwelling adults with schizophrenia and 37 healthy controls were recruited. Fifteen of the healthy controls were re-tested one week later. All participants were administered the University of California, San Diego, Performance-based Skill Assessment-Brief version (UPSA-B) and the MATRICS Consensus Cognitive Battery (MCCB). The finalized assessment included three subdomains: transportation, financial management and work ability. The test-retest and inter-rater reliabilities were good. The total score significantly correlated with the UPSA-B. The performance of individuals with schizophrenia was significantly more impaired than healthy controls, especially in the domain of work ability. Among individuals with schizophrenia, functional outcome was influenced by premorbid functioning, negative symptoms and neurocognition such as processing speed, visual learning and attention/vigilance. © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Validation of the Portuguese version of the Evidence-Based Practice Questionnaire

    PubMed Central

    Pereira, Rui Pedro Gomes; Guerra, Ana Cristina Pinheiro; Cardoso, Maria José da Silva Peixoto de Oliveira; dos Santos, Alzira Teresa Vieira Martins Ferreira; de Figueiredo, Maria do Céu Aguiar Barbieri; Carneiro, António Cândido Vaz

    2015-01-01

    OBJECTIVES: to describe the process of translation and linguistic and cultural validation of the Evidence Based Practice Questionnaire for the Portuguese context: Questionário de Eficácia Clínica e Prática Baseada em Evidências (QECPBE). METHOD: a methodological and cross-sectional study was developed. The translation and back translation was performed according to traditional standards. Principal Components Analysis with orthogonal rotation according to the Varimax method was used to verify the QECPBE's psychometric characteristics, followed by confirmatory factor analysis. Internal consistency was determined by Cronbach's alpha. Data were collected between December 2013 and February 2014. RESULTS: 358 nurses delivering care in a hospital facility in North of Portugal participated in the study. QECPBE contains 20 items and three subscales: Practice (α=0.74); Attitudes (α=0.75); Knowledge/Skills and Competencies (α=0.95), presenting an overall internal consistency of α=0.74. The tested model explained 55.86% of the variance and presented good fit: χ2(167)=520.009; p = 0.0001; χ2df=3.114; CFI=0.908; GFI=0.865; PCFI=0.798; PGFI=0.678; RMSEA=0.077 (CI90%=0.07-0.08). CONCLUSION: confirmatory factor analysis revealed the questionnaire is valid and appropriate to be used in the studied context. PMID:26039307

  6. Multi-institutional validation of a web-based core competency assessment system.

    PubMed

    Tabuenca, Arnold; Welling, Richard; Sachdeva, Ajit K; Blair, Patrice G; Horvath, Karen; Tarpley, John; Savino, John A; Gray, Richard; Gulley, Julie; Arnold, Teresa; Wolfe, Kevin; Risucci, Donald A

    2007-01-01

    The Association of Program Directors in Surgery and the Division of Education of the American College of Surgeons developed and implemented a web-based system for end-of-rotation faculty assessment of ACGME core competencies of residents. This study assesses its reliability and validity across multiple programs. Each assessment included ratings (1-5 scale) on 23 items reflecting the 6 core competencies. A total of 4241 end-of-rotation assessments were completed for 332 general surgery residents (> or =5 evaluations each) at 5 sites during the 2004-2005 and 2005-2006 academic years. The mean rating for each resident on each item was computed for each academic year. The mean rating of items representing each competency was computed for each resident. Additional data included USMLE and ABSITE scores, PGY, and status in program (categorical, designated preliminary, and undesignated preliminary). Coefficient alpha was greater than 0.90 for each competency score. Mean ratings for each competency increased significantly (p < 0.01) as a function of PGY. Mean ratings for professionalism and interpersonal/communication skills (IPC) were significantly higher than all other competencies at all PGY levels. Competency ratings of PGY 1 residents correlated significantly with USMLE Step I, ranging from (r = 0.26, p < 0.01) for Professionalism to (r = 0.41, p < 0.001) for Systems-Based Practice. Ratings of Knowledge (r = 0.31, p < 0.01), Practice-Based Learning & Improvement (PBLI; r = 0.22, p < 0.05), and Systems-Based Practice (r = 0.20, p < 0.05) correlated significantly with 2005 ABSITE Total Percentile. Ratings of all competencies correlated significantly with the 2006 ABSITE Total Percentile Score (range: r = 0.20, p < 0.05 for professionalism to r = 0.35, p < 0.001 for knowledge). Categorical and designated preliminary residents received significantly higher ratings (p < 0.05) than nondesignated preliminaries for knowledge, patient care, PBLI, and systems-based practice only

  7. An entropy-based nonparametric test for the validation of surrogate endpoints.

    PubMed

    Miao, Xiaopeng; Wang, Yong-Cheng; Gangopadhyay, Ashis

    2012-06-30

    We present a nonparametric test to validate surrogate endpoints based on measure of divergence and random permutation. This test is a proposal to directly verify the Prentice statistical definition of surrogacy. The test does not impose distributional assumptions on the endpoints, and it is robust to model misspecification. Our simulation study shows that the proposed nonparametric test outperforms the practical test of the Prentice criterion in terms of both robustness of size and power. We also evaluate the performance of three leading methods that attempt to quantify the effect of surrogate endpoints. The proposed method is applied to validate magnetic resonance imaging lesions as the surrogate endpoint for clinical relapses in a multiple sclerosis trial. Copyright © 2012 John Wiley & Sons, Ltd.

  8. The Mastery Rubric for Evidence-Based Medicine: Institutional Validation via Multidimensional Scaling.

    PubMed

    Tractenberg, Rochelle E; Gushta, Matthew M; Weinfeld, Jeffrey M

    2016-01-01

    CONSTRUCT: In this study we describe a multidimensional scaling (MDS) exercise to validate the curricular elements composing a new Mastery Rubric (MR) for a curriculum in evidence-based medicine (EBM). This MR-EBM comprises 10 elements of knowledge, skills, and abilities (KSAs) representing our institutional learning goals of career-spanning engagement with EBM. An MR also includes developmental trajectories for each KSA, beginning with medical school coursework, including residency training, and outlining the qualifications of individuals to teach and mentor in EBM. The development was not part of the validation effort, as our curriculum is focused at a single stage (undergraduate medical students). An MR comprises the desired KSAs for an entire curriculum, together with descriptions of a learner's performance and/or capabilities as they develop from novice to proficiency of the curricular target(s). The MR construct is intended to support curriculum development or refinement by capturing the KSAs that support the articulation of concrete learning goals; it also promotes assessment that demonstrates development in the target KSAs and encourages reflection and self-directed learning throughout the learner's career. Two other MRs have been published, and this is the first one specific to teaching and learning in medicine; this is also the first one created specifically to evaluate an existing curriculum. To validate the dispersion of the elements of the EBM curriculum, the nine clinical instructors in the EBM two-course curriculum completed an MDS exercise, rating the similarities of the 10 curricular elements. MDS is a mathematical approach to understanding relationships among concepts/objects when these relationships are difficult to quantify. Eliciting similarity ratings biased the responses toward the null hypothesis (that the elements are not different). MDS results suggested that the MR represents 10 different, although related, facets of the construct

  9. Development and initial validation of primary care provider mental illness management and team-based care self-efficacy scales.

    PubMed

    Loeb, Danielle F; Crane, Lori A; Leister, Erin; Bayliss, Elizabeth A; Ludman, Evette; Binswanger, Ingrid A; Kline, Danielle M; Smith, Meredith; deGruy, Frank V; Nease, Donald E; Dickinson, L Miriam

    Develop and validate self-efficacy scales for primary care provider (PCP) mental illness management and team-based care participation. We developed three self-efficacy scales: team-based care (TBC), mental illness management (MIM), and chronic medical illness (CMI). We developed the scales using Bandura's Social Cognitive Theory as a guide. The survey instrument included items from previously validated scales on team-based care and mental illness management. We administered a mail survey to 900 randomly selected Colorado physicians. We conducted exploratory principal factor analysis with oblique rotation. We constructed self-efficacy scales and calculated standardized Cronbach's alpha coefficients to test internal consistency. We calculated correlation coefficients between the MIM and TBC scales and previously validated measures related to each scale to evaluate convergent validity. We tested correlations between the TBC and the measures expected to correlate with the MIM scale and vice versa to evaluate discriminant validity. PCPs (n=402, response rate=49%) from diverse practice settings completed surveys. Items grouped into factors as expected. Cronbach's alphas were 0.94, 0.88, and 0.83 for TBC, MIM, and CMI scales respectively. In convergent validity testing, the TBC scale was correlated as predicted with scales assessing communications strategies, attitudes toward teams, and other teamwork indicators (r=0.25 to 0.40, all statistically significant). Likewise, the MIM scale was significantly correlated with several items about knowledge and experience managing mental illness (r=0.24 to 41, all statistically significant). As expected in discriminant validity testing, the TBC scale had only very weak correlations with the mental illness knowledge and experience managing mental illness items (r=0.03 to 0.12). Likewise, the MIM scale was only weakly correlated with measures of team-based care (r=0.09 to.17). This validation study of MIM and TBC self-efficacy scales

  10. Knowledge Based Systems (KBS) Verification, Validation, Evaluation, and Testing (VVE&T) Bibliography: Topical Categorization

    DTIC Science & Technology

    2003-03-01

    Different?," Jour. of Experimental & Theoretical Artificial Intelligence, Special Issue on Al for Systems Validation and Verification, 12(4), 2000, pp...Hamilton, D., " Experiences in Improving the State of Practice in Verification and Validation of Knowledge-Based Systems," Workshop Notes of the AAAI...Unsuspected Power of the Standard Turing Test," Jour. of Experimental & Theoretical Artificial Intelligence., 12, 2000, pp3 3 1-3 4 0 . [30] Gaschnig

  11. AIRS Retrieval Validation During the EAQUATE

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L.; Cuomo, Vincenzo; Taylor, Jonathan P.; Barnet, Christopher D.; DiGirolamo, Paolo; Pappalardo, Gelsomina; Larar, Allen M.; Liu, Xu; Newman, Stuart M.

    2006-01-01

    Atmospheric and surface thermodynamic parameters retrieved with advanced hyperspectral remote sensors of Earth observing satellites are critical for weather prediction and scientific research. The retrieval algorithms and retrieved parameters from satellite sounders must be validated to demonstrate the capability and accuracy of both observation and data processing systems. The European AQUA Thermodynamic Experiment (EAQUATE) was conducted mainly for validation of the Atmospheric InfraRed Sounder (AIRS) on the AQUA satellite, but also for assessment of validation systems of both ground-based and aircraft-based instruments which will be used for other satellite systems such as the Infrared Atmospheric Sounding Interferometer (IASI) on the European MetOp satellite, the Cross-track Infrared Sounder (CrIS) from the NPOESS Preparatory Project and the following NPOESS series of satellites. Detailed inter-comparisons were conducted and presented using different retrieval methodologies: measurements from airborne ultraspectral Fourier transform spectrometers, aircraft in-situ instruments, dedicated dropsondes and radiosondes, and ground based Raman Lidar, as well as from the European Center for Medium range Weather Forecasting (ECMWF) modeled thermal structures. The results of this study not only illustrate the quality of the measurements and retrieval products but also demonstrate the capability of these validation systems which are put in place to validate current and future hyperspectral sounding instruments and their scientific products.

  12. MO-C-17A-03: A GPU-Based Method for Validating Deformable Image Registration in Head and Neck Radiotherapy Using Biomechanical Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J; Min, Y; Qi, S

    2014-06-15

    Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy but a systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution groundtruth. We address this limitation by developing a GPU-based framework that provides a systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulatingmore » skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data was then generated from the deformed anatomy. The initial and final positions for one hundred randomly-chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between +/− 4 degrees along each axis, and tumor volumes were systematically reduced up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical flow based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per data set varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR, and report the results by structure, giving a much more in depth investigation of the error. Conclusions: We presented a GPU based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT

  13. Validation of a photography-based goniometry method for measuring joint range of motion.

    PubMed

    Blonna, Davide; Zarkadas, Peter C; Fitzsimmons, James S; O'Driscoll, Shawn W

    2012-01-01

    A critical component of evaluating the outcomes after surgery to restore lost elbow motion is the range of motion (ROM) of the elbow. This study examined if digital photography-based goniometry is as accurate and reliable as clinical goniometry for measuring elbow ROM. Instrument validity and reliability for photography-based goniometry were evaluated for a consecutive series of 50 elbow contractures by 4 observers with different levels of elbow experience. Goniometric ROM measurements were taken with the elbows in full extension and full flexion directly in the clinic (once) and from digital photographs (twice in a blinded random manner). Instrument validity for photography-based goniometry was extremely high (intraclass correlation coefficient: extension = 0.98, flexion = 0.96). For extension and flexion measurements by the expert surgeon, systematic error was negligible (0° and 1°, respectively). Limits of agreement were 7° (95% confidence interval [CI], 5° to 9°) and -7° (95% CI, -5° to -9°) for extension and 8° (95% CI, 6° to 10°) and -7° (95% CI, -5° to -9°) for flexion. Interobserver reliability for photography-based goniometry was better than that for clinical goniometry. The least experienced observer's photographic goniometry measurements were closer to the reference measurements than the clinical goniometry measurements. Photography-based goniometry is accurate and reliable for measuring elbow ROM. The photography-based method relied less on observer expertise than clinical goniometry. This validates an objective measure of patient outcome without requiring doctor-patient contact at a tertiary care center, where most contracture surgeries are done. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  14. Validity of a multipass, web-based, 24-hour self-administered recall for assessment of total energy intake in blacks and whites.

    PubMed

    Arab, Lenore; Tseng, Chi-Hong; Ang, Alfonso; Jardack, Patricia

    2011-12-01

    To date, Web-based 24-hour recalls have not been validated using objective biomarkers. From 2006 to 2009, the validity of 6 Web-based DietDay 24-hour recalls was tested among 115 black and 118 white healthy adults from Los Angeles, California, by using the doubly labeled water method, and the results were compared with the results of the Diet History Questionnaire, a food frequency questionnaire developed by the National Cancer Institute. The authors performed repeated measurements in a subset of 53 subjects approximately 6 months later to estimate the stability of the doubly labeled water measurement. The attenuation factors for the DietDay recall were 0.30 for blacks and 0.26 for whites. For the Diet History Questionnaire, the attenuation factors were 0.15 and 0.17 for blacks and whites, respectively. Adjusted correlations between true energy intake and the recalls were 0.50 and 0.47 for blacks and whites, respectively, for the DietDay recall. For the Diet History Questionnaire, they were 0.34 and 0.36 for blacks and whites, respectively. The rate of underreporting of more than 30% of calories was lower with the recalls than with the questionnaire (25% and 41% vs. 34% and 52% for blacks and whites, respectively). These findings suggest that Web-based DietDay dietary recalls offer an inexpensive and widely accessible dietary assessment alternative, the validity of which is equally strong among black and white adults. The validity of the Web-administered recall was superior to that of the paper food frequency questionnaire.

  15. Gene network biological validity based on gene-gene interaction relevance.

    PubMed

    Gómez-Vela, Francisco; Díaz-Díaz, Norberto

    2014-01-01

    In recent years, gene networks have become one of the most useful tools for modeling biological processes. Many inference gene network algorithms have been developed as techniques for extracting knowledge from gene expression data. Ensuring the reliability of the inferred gene relationships is a crucial task in any study in order to prove that the algorithms used are precise. Usually, this validation process can be carried out using prior biological knowledge. The metabolic pathways stored in KEGG are one of the most widely used knowledgeable sources for analyzing relationships between genes. This paper introduces a new methodology, GeneNetVal, to assess the biological validity of gene networks based on the relevance of the gene-gene interactions stored in KEGG metabolic pathways. Hence, a complete KEGG pathway conversion into a gene association network and a new matching distance based on gene-gene interaction relevance are proposed. The performance of GeneNetVal was established with three different experiments. Firstly, our proposal is tested in a comparative ROC analysis. Secondly, a randomness study is presented to show the behavior of GeneNetVal when the noise is increased in the input network. Finally, the ability of GeneNetVal to detect biological functionality of the network is shown.

  16. Web-based data acquisition and management system for GOSAT validation Lidar data analysis

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra N.; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei

    2012-11-01

    An web-base data acquisition and management system for GOSAT (Greenhouse gases Observation SATellite) validation lidar data analysis is developed. The system consists of data acquisition sub-system (DAS) and data management sub-system (DMS). DAS written in Perl language acquires AMeDAS ground-level meteorological data, Rawinsonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. DMS written in PHP language demonstrates satellite-pass date and all acquired data.

  17. Responsiveness and predictive validity of the tablet-based symbol digit modalities test in patients with stroke.

    PubMed

    Hsiao, Pei-Chi; Yu, Wan-Hui; Lee, Shih-Chieh; Chen, Mei-Hsiang; Hsieh, Ching-Lin

    2018-06-14

    The responsiveness and predictive validity of the Tablet-based Symbol Digit Modalities Test (T-SDMT) are unknown, which limits the utility of the T-SDMT in both clinical and research settings. The purpose of this study was to examine the responsiveness and predictive validity of the T-SDMT in inpatients with stroke. A follow-up, repeated-assessments design. One rehabilitation unit at a local medical center. A total of 50 inpatients receiving rehabilitation completed T-SDMT assessments at admission to and discharge from a rehabilitation ward. The median follow-up period was 14 days. The Barthel index (BI) was assessed at discharge and was used as the criterion of the predictive validity. The mean changes in the T-SDMT scores between admission and discharge were statistically significant (paired t-test = 3.46, p = 0.001). The T-SDMT scores showed a nearly moderate standardized response mean (0.49). A moderate association (Pearson's r = 0.47) was found between the scores of the T-SDMT at admission and those of the BI at discharge, indicating good predictive validity of the T-SDMT. Our results support the responsiveness and predictive validity of the T-SDMT in patients with stroke receiving rehabilitation in hospitals. This study provides empirical evidence supporting the use of the T-SDMT as an outcome measure for assessing processingspeed in inpatients with stroke. The scores of the T-SDMT could be used to predict basic activities of daily living function in inpatients with stroke.

  18. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    SummaryIn regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measurements of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed percentage leave out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in Chemometrics and Econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.

  19. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    A three-dimensional photon dosimetry in tissues is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Comparisons between the MC and the dosimetry probe measurements were on an average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, 8.91% perpendicular to the beam axis for WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured % errors were 17.71% and 18.04% along and perpendicular to beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R20.99), with average % error of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.

  20. Validating GPM-based Multi-satellite IMERG Products Over South Korea

    NASA Astrophysics Data System (ADS)

    Wang, J.; Petersen, W. A.; Wolff, D. B.; Ryu, G. H.

    2017-12-01

    Accurate precipitation estimates derived from space-borne satellite measurements are critical for a wide variety of applications such as water budget studies, and prevention or mitigation of natural hazards caused by extreme precipitation events. This study validates the near-real-time Early Run, Late Run and the research-quality Final Run Integrated Multi-Satellite Retrievals for GPM (IMERG) using Korean Quantitative Precipitation Estimation (QPE). The Korean QPE data are at a 1-hour temporal resolution and 1-km by 1-km spatial resolution, and were developed by Korea Meteorological Administration (KMA) from a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system utilizing eleven radars over the Republic of Korea. The validation is conducted by comparing Version-04A IMERG (Early, Late and Final Runs) with Korean QPE over the area (124.5E-130.5E, 32.5N-39N) at various spatial and temporal scales during March 2014 through November 2016. The comparisons demonstrate the reasonably good ability of Version-04A IMERG products in estimating precipitation over South Korea's complex topography that consists mainly of hills and mountains, as well as large coastal plains. Based on this data, the Early Run, Late Run and Final Run IMERG precipitation estimates higher than 0.1mm h-1 are about 20.1%, 7.5% and 6.1% higher than Korean QPE at 0.1o and 1-hour resolutions. Detailed comparison results are available at https://wallops-prf.gsfc.nasa.gov/KoreanQPE.V04/index.html

  1. Urdu translation of the Hamilton Rating Scale for Depression: Results of a validation study

    PubMed Central

    Hashmi, Ali M.; Naz, Shahana; Asif, Aftab; Khawaja, Imran S.

    2016-01-01

    Objective: To develop a standardized validated version of the Hamilton Rating Scale for Depression (HAM-D) in Urdu. Methods: After translation of the HAM-D into the Urdu language following standard guidelines, the final Urdu version (HAM-D-U) was administered to 160 depressed outpatients. Inter-item correlation was assessed by calculating Cronbach alpha. Correlation between HAM-D-U scores at baseline and after a 2-week interval was evaluated for test-retest reliability. Moreover, scores of two clinicians on HAM-D-U were compared for inter-rater reliability. For establishing concurrent validity, scores of HAM-D-U and BDI-U were compared by using Spearman correlation coefficient. The study was conducted at Mayo Hospital, Lahore, from May to December 2014. Results: The Cronbach alpha for HAM-D-U was 0.71. Composite scores for HAM-D-U at baseline and after a 2-week interval were also highly correlated with each other (Spearman correlation coefficient 0.83, p-value < 0.01) indicating good test-retest reliability. Composite scores for HAM-D-U and BDI-U were positively correlated with each other (Spearman correlation coefficient 0.85, p < 0.01) indicating good concurrent validity. Scores of two clinicians for HAM-D-U were also positively correlated (Spearman correlation coefficient 0.82, p-value < 0.01) indicated good inter-rater reliability. Conclusion: The HAM-D-U is a valid and reliable instrument for the assessment of Depression. It shows good inter-rater and test-retest reliability. The HAM-D-U can be a tool either for clinical management or research. PMID:28083049

  2. An extended model for ultrasonic-based enhanced oil recovery with experimental validation.

    PubMed

    Mohsin, Mohammed; Meribout, Mahmoud

    2015-03-01

    This paper suggests a new ultrasonic-based enhanced oil recovery (EOR) model for application in oil field reservoirs. The model is modular and consists of an acoustic module and a heat transfer module, where the heat distribution is updated when the temperature rise exceeds 1 °C. The model also considers the main EOR parameters which includes both the geophysical (i.e., porosity, permeability, temperature rise, and fluid viscosity) and acoustical (e.g., acoustic penetration and pressure distribution in various fluids and mediums) properties of the wells. Extended experiments were performed using powerful ultrasonic waves which were applied for different kind of oils & oil saturated core samples. The corresponding results showed a good matching with those obtained from simulations, validating the suggested model to some extent. Hence, a good recovery rate of around 88.2% of original oil in place (OOIP) was obtained after 30 min of continuous generation of ultrasonic waves. This leads to consider the ultrasonic-based EOR as another tangible solution for EOR. This claim is supported further by considering several injection wells where the simulation results indicate that with four (4) injection wells; the recovery rate may increase up-to 96.7% of OOIP. This leads to claim the high potential of ultrasonic-based EOR as compared to the conventional methods. Following this study, the paper also proposes a large scale ultrasonic-based EOR hardware system for installation in oil fields. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Web-based questionnaires to assess perinatal outcome proved to be valid.

    PubMed

    van Gelder, Marleen M H J; Vorstenbosch, Saskia; Derks, Lineke; Te Winkel, Bernke; van Puijenbroek, Eugène P; Roeleveld, Nel

    2017-10-01

    The objective of this study was to validate a Web-based questionnaire completed by the mother to assess perinatal outcome used in a prospective cohort study. For 882 women with an estimated date of delivery between February 2012 and February 2015 who participated in the PRegnancy and Infant DEvelopment (PRIDE) Study, we compared data on pregnancy outcome, including mode of delivery, plurality, gestational age, birth weight and length, head circumference, birth defects, and infant sex, from Web-based questionnaires administered to the mothers 2 months after delivery with data from obstetric records. For continuous variables, we calculated intraclass correlation coefficients (ICCs) with 95% confidence intervals (CIs), whereas sensitivity and specificity were determined for categorical variables. We observed only very small differences between the two methods of data collection for gestational age (ICC, 0.91; 95% CI, 0.90-0.92), birth weight (ICC, 0.96; 95% CI, 0.95-0.96), birth length (ICC, 0.90; 95% CI, 0.87-0.92), and head circumference (ICC, 0.88; 95% CI, 0.80-0.93). Agreement between the Web-based questionnaire and obstetric records was high as well, with sensitivity ranging between 0.86 (termination of pregnancy) and 1.00 (four outcomes) and specificity between 0.96 (term birth) and 1.00 (nine outcomes). Our study provides evidence that Web-based questionnaires could be considered as a valid complementary or alternative method of data collection. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415nm. The fourth method involves the formation of a yellow complex peaking at 361nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  6. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    PubMed Central

    Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.

    2017-01-01

    Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758

  7. Web-based Food Behaviour Questionnaire: validation with grades six to eight students.

    PubMed

    Hanning, Rhona M; Royall, Dawna; Toews, Jenn E; Blashill, Lindsay; Wegener, Jessica; Driezen, Pete

    2009-01-01

    The web-based Food Behaviour Questionnaire (FBQ) includes a 24-hour diet recall, a food frequency questionnaire, and questions addressing knowledge, attitudes, intentions, and food-related behaviours. The survey has been revised since it was developed and initially validated. The current study was designed to obtain qualitative feedback and to validate the FBQ diet recall. "Think aloud" techniques were used in cognitive interviews with dietitian experts (n=11) and grade six students (n=21). Multi-ethnic students (n=201) in grades six to eight at urban southern Ontario schools completed the FBQ and, subsequently, one-on-one diet recall interviews with trained dietitians. Food group and nutrient intakes were compared. Users provided positive feedback on the FBQ. Suggestions included adding more foods, more photos for portion estimation, and online student feedback. Energy and nutrient intakes were positively correlated between FBQ and dietitian interviews, overall and by gender and grade (all p<0.001). Intraclass correlation coefficients were ≥0.5 for energy and macro-nutrients, although the web-based survey underestimated energy (10.5%) and carbohydrate (-15.6%) intakes (p<0.05). Under-estimation of rice and pasta portions on the web accounted for 50% of this discrepancy. The FBQ is valid, relative to 24-hour recall interviews, for dietary assessment in diverse populations of Ontario children in grades six to eight.

  8. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation is examined of a dynamic strength model for humans. This model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque, or position and velocity, in all rotational planes. This data was reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining torque as a function of position and velocity. The isolated joint torque equations were then used to compute forces resulting from a composite motion, in this case, a ratchet wrench push and pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated joint measures. Calculated T values comparing model versus measured values for 14 subjects were well within the statistically acceptable limits and regression analysis revealed coefficient of variation between actual and measured to be within 0.72 and 0.80.

  9. Validation of a skinfold based index for tracking proportional changes in lean mass

    PubMed Central

    Slater, G J; Duthie, G M; Pyne, D B; Hopkins, W G

    2006-01-01

    Background The lean mass index (LMI) is a new empirical measure that tracks within‐subject proportional changes in body mass adjusted for changes in skinfold thickness. Objective To compare the ability of the LMI and other skinfold derived measures of lean mass to monitor changes in lean mass. Methods 20 elite rugby union players undertook full anthropometric profiles on two occasions 10 weeks apart to calculate the LMI and five skinfold based measures of lean mass. Hydrodensitometry, deuterium dilution, and dual energy x ray absorptiometry provided a criterion choice, four compartment (4C) measure of lean mass for validation purposes. Regression based measures of validity, derived for within‐subject proportional changes through log transformation, included correlation coefficients and standard errors of the estimate. Results The correlation between change scores for the LMI and 4C lean mass was moderate (0.37, 90% confidence interval −0.01 to 0.66) and similar to the correlations for the other practical measures of lean mass (range 0.26 to 0.42). Standard errors of the estimate for the practical measures were in the range of 2.8–2.9%. The LMI correctly identified the direction of change in 4C lean mass for 14 of the 20 athletes, compared with 11 to 13 for the other practical measures of lean mass. Conclusions The LMI is probably as good as other skinfold based measures for tracking lean mass and is theoretically more appropriate. Given the impracticality of the 4C criterion measure for routine field use, the LMI may offer a convenient alternative for monitoring physique changes, provided its utility is established under various conditions. PMID:16505075

  10. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptopsmore » to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.« less

  11. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE PAGES

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...

    2017-03-23

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptopsmore » to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.« less

  12. Motion-base simulator results of advanced supersonic transport handling qualities with active controls

    NASA Technical Reports Server (NTRS)

    Feather, J. B.; Joshi, D. S.

    1981-01-01

    Handling qualities of the unaugmented advanced supersonic transport (AST) are deficient in the low-speed, landing approach regime. Consequently, improvement in handling with active control augmentation systems has been achieved using implicit model-following techniques. Extensive fixed-based simulator evaluations were used to validate these systems prior to tests with full motion and visual capabilities on a six-axis motion-base simulator (MBS). These tests compared the handling qualities of the unaugmented AST with several augmented configurations to ascertain the effectiveness of these systems. Cooper-Harper ratings, tracking errors, and control activity data from the MBS tests have been analyzed statistically. The results show the fully augmented AST handling qualities have been improved to an acceptable level.

  13. Killed by Police: Validity of Media-Based Data and Misclassification of Death Certificates in Massachusetts, 2004-2016.

    PubMed

    Feldman, Justin M; Gruskin, Sofia; Coull, Brent A; Krieger, Nancy

    2017-10-01

    To assess the validity of demographic data reported in news media-based data sets for persons killed by police in Massachusetts (2004-2016) and to evaluate misclassification of these deaths in vital statistics mortality data. We identified 84 deaths resulting from police intervention in 4 news media-based data sources (WGBH News, Fatal Encounters, The Guardian, and The Washington Post) and, via record linkage, conducted matched-pair analyses with the Massachusetts mortality data. Compared with death certificates, there was near-perfect correlation for age in all sources (Pearson r > 0.99) and perfect concordance for gender. Agreement for race/ethnicity ranged from perfect (The Counted and The Washington Post) to high (Fatal Encounters Cohen's κ = 0.92). Among the 78 decedents for whom finalized International Classification of Diseases, 10th Revision (ICD-10), codes were available, 59 (75.6%) were properly classified as "deaths due to legal intervention." In Massachusetts, the 4 media-based sources on persons killed by police provide valid demographic data. Misclassification of deaths due to legal intervention in the mortality data does, however, remain a problem. Replication of the study in other states and nationally is warranted.

  14. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity.

    PubMed

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-06-21

    Frequently, patients who suffer injuries in some lower member require forearm crutches in order to partially unload weight-bearing. These lesions cause pain in lower limb unloading and their progression should be controlled objectively to avoid significant errors in accuracy and, consequently, complications and after effects in lesions. The design of a new and feasible tool that allows us to control and improve the accuracy of loads exerted on crutches during aided gait is necessary, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 Platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI Platform and the GCH System. A regression line to determine the predictive ability of the GCH system towards the AMTI Platform was found, which obtained a precision of 99.3%. A detailed statistical analysis is presented for all the measurements and also segregated for several requested loads on the crutches (10%, 25% and 50% of body weight). Our results show that our system, designed for assessing loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of loads.

  15. Language-based Measures of Mindfulness: Initial Validity and Clinical Utility

    PubMed Central

    Collins, Susan E.; Chawla, Neharika; Hsu, Sharon H.; Grow, Joel; Otto, Jacqueline M.; Marlatt, G. Alan

    2009-01-01

    This study examined relationships among language use, mindfulness, and substance-use treatment outcomes in the context of an efficacy trial of mindfulness-based relapse prevention (MBRP) for adults with alcohol and other drug use (AOD) disorders (see Bowen, Chawla, Collins et al., in press). An expert panel generated two categories of mindfulness language (ML) describing the mindfulness state and the more encompassing “mindfulness journey,” which included words describing challenges of developing a mindfulness practice. MBRP participants (n=48) completed baseline sociodemographic and AOD measures, and participated in the 8-week MBRP program. AOD data were collected during the 4-month follow-up. A word count program assessed the frequency of ML and other linguistic markers in participants’ responses to open-ended questions about their postintervention impressions of mindfulness practice and MBRP. Findings supported concurrent validity of ML categories: ML words appeared more frequently in the MBRP manual compared to the 12-step Big Book. Further, ML categories correlated with other linguistic variables related to the mindfulness construct. Finally, predictive validity was supported: greater use of ML predicted fewer AOD use days during the 4-month follow-up. This study provided initial support for ML as a valid, clinically useful mindfulness measure. If future studies replicate these findings, ML could be used in conjunction with self-report to provide a more complete picture of the mindfulness experience. PMID:20025383

  16. Concurrent Validity and Classification Accuracy of Curriculum-Based Measurement for Written Expression

    ERIC Educational Resources Information Center

    Furey, William M.; Marcotte, Amanda M.; Hintze, John M.; Shackett, Caroline M.

    2016-01-01

    The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect…

  17. Validation of satellite-based rainfall in Kalahari

    NASA Astrophysics Data System (ADS)

    Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter

    2018-06-01

    Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMOPRH∼27 km and CMORPH∼8 km were evaluated against nine, daily rain gauge records in Central Kalahari Basin (CKB), over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias-correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall, was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better results of descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable SREs performance analysis indicator were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicated SREs' sensitivity and bias of rainfall detection, respectively. The Time Varying and Space Fixed (TVSF) bias-correction scheme, improved some error measures but resulted in the reduction of the spatial correlation distance, thus increased, already high, spatial rainfall variability of all the four SREs. This study highlighted SREs as valuable source of daily rainfall data providing

  18. Validity of the Multidimensional Fatigue Symptom Inventory-Short Form in an African American Community-Based Sample

    PubMed Central

    Asvat, Yasmin; Malcarne, Vanessa L.; Sadler, Georgia R.; Jacobsen, Paul B.

    2014-01-01

    Objectives This study examined the psychometric properties of the Multidimensional Fatigue Symptom Inventory-Short Form (MFSI-SF) in a community-based sample of African Americans. Design. A sample of 340 African Americans (116 men, 224 women) ranging in age from 18–81 years were recruited from the community (e.g., churches, health fairs, beauty salons). Participants completed a brief demographic survey, the MFSI-SF and the Positive and Negative Affect Schedule. Results The structural validity of the MFSI-SF for a community-based sample of African Americans was not supported. The five dimensions of fatigue (General, Emotional, Physical, Mental, Vigor) found for Whites in prior research were not found for African Americans in this study. Instead, fatigue, while multidimensional for African Americans, was best represented by a unique four-four profile in which general and emotional fatigue are collapsed into a single dimension and physical fatigue, mental fatigue, and vigor are relatively distinct. Hence, in the absence of modifications, the MFSI-SF cannot be considered to be structurally invariant across ethnic groups. A modified four-factor version of the MFSI-SF exhibited excellent internal consistency reliability and evidence supports its convergent validity. Using the modified four-factor version, gender and age were not meaningfully associated with MFSI-SF scores. Conclusion Future research should further examine whether modifications to the MFSI-SF would, as the findings suggest, improve its validity as a measure of multidimensional fatigue in African Americans. PMID:24527980

  19. Validation of simulated earthquake ground motions based on evolution of intensity and frequency content

    USGS Publications Warehouse

    Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin

    2015-01-01

    Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.

  20. Validating MEDIQUAL Constructs

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate MEDIQUAL constructs through the different media users in help desk service. In previous research, only two end-users' constructs were used: assurance and responsiveness. In this paper, we extend MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, which are based on the SERVQUAL theory. The results suggest that: 1) five MEDIQUAL constructs are validated through the factor analysis. That is, importance of the constructs have relatively high correlations between measures of the same construct using different methods and low correlations between measures of the constructs that are expected to differ; and 2) five MEDIQUAL constructs are statistically significant on media users' satisfaction in help desk service by regression analysis.

  1. Face and Construct Validity of a Novel Virtual Reality-Based Bimanual Laparoscopic Force-Skills Trainer With Haptics Feedback.

    PubMed

    Prasad, Raghu; Muniyandi, Manivannan; Manoharan, Govindan; Chandramohan, Servarayan M

    2018-05-01

    The purpose of this study was to examine the face and construct validity of a custom-developed bimanual laparoscopic force-skills trainer with haptics feedback. The study also examined the effect of handedness on fundamental and complex tasks. Residents (n = 25) and surgeons (n = 25) performed virtual reality-based bimanual fundamental and complex tasks. Tool-tissue reaction forces were summed, recorded, and analysed. Seven different force-based measures and a 1-time measure were used as metrics. Subsequently, participants filled out face validity and demographic questionnaires. Residents and surgeons were positive on the design, workspace, and usefulness of the simulator. Construct validity results showed significant differences between residents and experts during the execution of fundamental and complex tasks. In both tasks, residents applied large forces with higher coefficient of variation and force jerks (P < .001). Experts, with their dominant hand, applied lower forces in complex tasks and higher forces in fundamental tasks (P < .001). The coefficients of force variation (CoV) of residents and experts were higher in complex tasks (P < .001). Strong correlations were observed between CoV and task time for fundamental (r = 0.70) and complex tasks (r = 0.85). Range of smoothness of force was higher for the non-dominant hand in both fundamental and complex tasks. The simulator was able to differentiate the force-skills of residents and surgeons, and objectively evaluate the effects of handedness on laparoscopic force-skills. Competency-based laparoscopic skills assessment curriculum should be updated to meet the requirements of bimanual force-based training.

  2. Validation of Remote Sensing Retrieval Products using Data from a Wireless Sensor-Based Online Monitoring in Antarctica

    PubMed Central

    Li, Xiuhong; Cheng, Xiao; Yang, Rongjin; Liu, Qiang; Qiu, Yubao; Zhang, Jialin; Cai, Erli; Zhao, Long

    2016-01-01

    Of the modern technologies in polar-region monitoring, the remote sensing technology that can instantaneously form large-scale images has become much more important in helping acquire parameters such as the freezing and melting of ice as well as the surface temperature, which can be used in the research of global climate change, Antarctic ice sheet responses, and cap formation and evolution. However, the acquirement of those parameters is impacted remarkably by the climate and satellite transit time which makes it almost impossible to have timely and continuous observation data. In this research, a wireless sensor-based online monitoring platform (WSOOP) for the extreme polar environment is applied to obtain a long-term series of data which is site-specific and continuous in time. Those data are compared and validated with the data from a weather station at Zhongshan Station Antarctica and the result shows an obvious correlation. Then those data are used to validate the remote sensing products of the freezing and melting of ice and the surface temperature and the result also indicated a similar correlation. The experiment in Antarctica has proven that WSOOP is an effective system to validate remotely sensed data in the polar region. PMID:27869668

  3. Accelerator-based validation of shielding codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeitlin, Cary; Heilbronn, Lawrence; Miller, Jack

    2002-08-12

    The space radiation environment poses risks to astronaut health from a diverse set of sources, ranging from low-energy protons and electrons to highly-charged, high-energy atomic nuclei and their associated fragmentation products, including neutrons. The low-energy protons and electrons are the source of most of the radiation dose to Shuttle and ISS crews, while the more energetic particles that comprise the Galactic Cosmic Radiation (protons, He, and heavier nuclei up to Fe) will be the dominant source for crews on long-duration missions outside the earth's magnetic field. Because of this diversity of sources, a broad ground-based experimental effort is required tomore » validate the transport and shielding calculations used to predict doses and dose-equivalents under various mission scenarios. The experimental program of the LBNL group, described here, focuses principally on measurements of charged particle and neutron production in high-energy heavy-ion fragmentation. Other aspects of the program include measurements of the shielding provided by candidate spacesuit materials against low-energy protons (particularly relevant to extra-vehicular activities in low-earth orbit), and the depth-dose relations in tissue for higher-energy protons. The heavy-ion experiments are performed at the Brookhaven National Laboratory's Alternating Gradient Synchrotron and the Heavy-Ion Medical Accelerator in Chiba in Japan. Proton experiments are performed at the Lawrence Berkeley National Laboratory's 88'' Cyclotron with a 55 MeV beam, and at the Loma Linda University Proton Facility with 100 to 250 MeV beam energies. The experimental results are an important component of the overall shielding program, as they allow for simple, well-controlled tests of the models developed to handle the more complex radiation environment in space.« less

  4. NNvPDB: Neural Network based Protein Secondary Structure Prediction with PDB Validation.

    PubMed

    Sakthivel, Seethalakshmi; S K M, Habeeb

    2015-01-01

    The predicted secondary structural states are not cross validated by any of the existing servers. Hence, information on the level of accuracy for every sequence is not reported by the existing servers. This was overcome by NNvPDB, which not only reported greater Q3 but also validates every prediction with the homologous PDB entries. NNvPDB is based on the concept of Neural Network, with a new and different approach of training the network every time with five PDB structures that are similar to query sequence. The average accuracy for helix is 76%, beta sheet is 71% and overall (helix, sheet and coil) is 66%. http://bit.srmuniv.ac.in/cgi-bin/bit/cfpdb/nnsecstruct.pl.

  5. Comparison of validity of mapping between drug indications and ICD-10. Direct and indirect terminology based approaches.

    PubMed

    Choi, Y; Jung, C; Chae, Y; Kang, M; Kim, J; Joung, K; Lim, J; Cho, S; Sung, S; Lee, E; Kim, S

    2014-01-01

    Mapping of drug indications to ICD-10 was undertaken in Korea by a public and a private institution for their own purposes. A different mapping approach was used by each institution, which presented a good opportunity to compare the validity of the two approaches. This study was undertaken to compare the validity of a direct mapping approach and an indirect terminology based mapping approach of drug indications against the gold standard drawn from the results of the two mapping processes. Three hundred and seventy-five cardiovascular reference drugs were selected from all listed cardiovascular drugs for the study. In the direct approach, two experienced nurse coders mapped the free text indications directly to ICD-10. In the indirect terminology based approach, the indications were extracted and coded in the Korean Standard Terminology of Medicine. These terminology coded indications were then manually mapped to ICD-10. The results of the two approaches were compared to the gold standard. A kappa statistic was calculated to see the compatibility of both mapping approaches. Recall, precision and F1 score of each mapping approach were calculated and analyzed using a paired t-test. The mean number of indications for the study drugs was 5.42. The mean number of ICD-10 codes that matched in direct approach was 46.32 and that of indirect terminology based approach was 56.94. The agreement of the mapping results between the two approaches were poor (kappa = 0.19). The indirect terminology based approach showed higher recall (86.78%) than direct approach (p < 0.001). However, there was no difference in precision and F1 score between the two approaches. Considering no differences in the F1 scores, both approaches may be used in practice for mapping drug indications to ICD-10. However, in terms of consistency, time and manpower, better results are expected from the indirect terminology based approach.

  6. Development and Validation of an Instrument for Evaluating Inquiry-Based Tasks in Science Textbooks

    ERIC Educational Resources Information Center

    Yang, Wenyuan; Liu, Enshan

    2016-01-01

    This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about…

  7. Creation and validation of web-based food allergy audiovisual educational materials for caregivers.

    PubMed

    Rosen, Jamie; Albin, Stephanie; Sicherer, Scott H

    2014-01-01

    Studies reveal deficits in caregivers' ability to prevent and treat food-allergic reactions with epinephrine and a consumer preference for validated educational materials in audiovisual formats. This study was designed to create brief, validated educational videos on food allergen avoidance and emergency management of anaphylaxis for caregivers of children with food allergy. The study used a stepwise iterative process including creation of a needs assessment survey consisting of 25 queries administered to caregivers and food allergy experts to identify curriculum content. Preliminary videos were drafted, reviewed, and revised based on knowledge and satisfaction surveys given to another cohort of caregivers and health care professionals. The final materials were tested for validation of their educational impact and user satisfaction using pre- and postknowledge tests and satisfaction surveys administered to a convenience sample of 50 caretakers who had not participated in the development stages. The needs assessment identified topics of importance including treatment of allergic reactions and food allergen avoidance. Caregivers in the final validation included mothers (76%), fathers (22%), and other caregivers (2%). Race/ethnicity were white (66%), black (12%), Asian (12%), Hispanic (8%), and other (2%). Knowledge tests (maximum score = 18) increased from a mean score of 12.4 preprogram to 16.7 postprogram (p < 0.0001). On a 7-point Likert scale, all satisfaction categories remained above a favorable mean score of 6, indicating participants were overall very satisfied, learned a lot, and found the materials to be informative, straightforward, helpful, and interesting. This web-based audiovisual curriculum on food allergy improved knowledge scores and was well received.

  8. Validation of a web-based questionnaire for pregnant women to assess utilization of internet: survey among an Italian sample.

    PubMed

    Siliquini, R; Saulle, R; Rabacchi, G; Bert, F; Massimi, A; Bulzomì, V; Boccia, A; La Torre, G

    2012-01-01

    Objective of this pilot study was to evaluate the reliability and validity of the web-based questionnaire in pregnant women as a tool to examine prevalence, knowledge and attitudes about internet utilization for health-related purposes, in a sample of Italian pregnant women. The questionnaire was composed by 9 sections for a total of 73 items. Reliability analysis was tested and content validity was evaluated using Cronbach's alpha to check internal consistency. Statistical analysis was performed through SPSS 13.0. Questionnaire was administered to 56 pregnant women. The higher value of Cronbach's alpha resulted on 61 items: alpha = 0.786 (n. 73 items: alpha = 0.579). High rate of pregnant women generally utilized internet (87.5%) and the 92.1% confirmed to use internet with the focus to acquire information about pregnancy (p < 0.0001). The questionnaire showed a good reliability property in the pilot study. In terms of internal consistency and validity appeared to have a good performance. Given the high prevalence of pregnant women that use internet to search information about their pregnancy status, professional healthcare workers should give advice regarding official websites where they could retrieve safe information and learn knowledge based on scientific evidence.

  9. Advances in Sprint Acceleration Profiling for Field-Based Team-Sport Athletes: Utility, Reliability, Validity and Limitations.

    PubMed

    Simperingham, Kim D; Cronin, John B; Ross, Angus

    2016-11-01

    Advanced testing technologies enable insight into the kinematic and kinetic determinants of sprint acceleration performance, which is particularly important for field-based team-sport athletes. Establishing the reliability and validity of the data, particularly from the acceleration phase, is important for determining the utility of the respective technologies. The aim of this systematic review was to explain the utility, reliability, validity and limitations of (1) radar and laser technology, and (2) non-motorised treadmill (NMT) and torque treadmill (TT) technology for providing kinematic and kinetic measures of sprint acceleration performance. A comprehensive search of the CINAHL Plus, MEDLINE (EBSCO), PubMed, SPORTDiscus, and Web of Science databases was conducted using search terms that included radar, laser, non-motorised treadmill, torque treadmill, sprint, acceleration, kinetic, kinematic, force, and power. Studies examining the kinematics or kinetics of short (≤10 s), maximal-effort sprint acceleration in adults or children, which included an assessment of reliability or validity of the advanced technologies of interest, were included in this systematic review. Absolute reliability, relative reliability and validity data were extracted from the selected articles and tabulated. The level of acceptance of reliability was a coefficient of variation (CV) ≤10 % and an intraclass correlation coefficient (ICC) or correlation coefficient (r) ≥0.70. A total of 34 studies met the inclusion criteria and were included in the qualitative analysis. Generally acceptable validity (r = 0.87-0.99; absolute bias 3-7 %), intraday reliability (CV ≤9.5 %; ICC/r ≥0.84) and interday reliability (ICC ≥0.72) were reported for data from radar and laser. However, low intraday reliability was reported for the theoretical maximum horizontal force (ICC 0.64) within adolescent athletes, and low validity was reported for velocity during the initial 5 m of a sprint

  10. [Validation of the IBS-SSS].

    PubMed

    Betz, C; Mannsdörfer, K; Bischoff, S C

    2013-10-01

    Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterised by abdominal pain, associated with stool abnormalities and changes in stool consistency. Diagnosis of IBS is based on characteristic symptoms and exclusion of other gastrointestinal diseases. A number of questionnaires exist to assist diagnosis and assessment of severity of the disease. One of these is the irritable bowel syndrome - severity scoring system (IBS-SSS). The IBS-SSS was validated 1997 in its English version. In the present study, the IBS-SSS has been validated in German language. To do this, a cohort of 60 patients with IBS according to the Rome III criteria, was compared with a control group of healthy individuals (n = 38). We studied sensitivity and reproducibility of the score, as well as the sensitivity to detect changes of symptom severity. The results of the German validation largely reflect the results of the English validation. The German version of the IBS-SSS is also a valid, meaningful and reproducible questionnaire with a high sensitivity to assess changes in symptom severity, especially in IBS patients with moderate symptoms. It is unclear if the IBS-SSS is also a valid questionnaire in IBS patients with severe symptoms because this group of patients was not studied. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Which Dimensions of Patient-Centeredness Matter? - Results of a Web-Based Expert Delphi Survey.

    PubMed

    Zill, Jördis M; Scholl, Isabelle; Härter, Martin; Dirmaier, Jörg

    2015-01-01

    Present models and definitions of patient-centeredness revealed a lack of conceptual clarity. Based on a prior systematic literature review, we developed an integrative model with 15 dimensions of patient-centeredness. The aims of this study were to 1) validate, and 2) prioritize these dimensions. A two-round web-based Delphi study was conducted. 297 international experts were invited to participate. In round one they were asked to 1) give an individual rating on a nine-point-scale on relevance and clarity of the dimensions, 2) add missing dimensions, and 3) prioritize the dimensions. In round two, experts received feedback about the results of round one and were asked to reflect and re-rate their own results. The cut-off for the validation of a dimension was a median < 7 on one of the criteria. 105 experts participated in round one and 71 in round two. In round one, one new dimension was suggested and included for discussion in round two. In round two, this dimension did not reach sufficient ratings to be included in the model. Eleven dimensions reached a median ≥ 7 on both criteria (relevance and clarity). Four dimensions had a median < 7 on one or both criteria. The five dimensions rated as most important were: patient as a unique person, patient involvement in care, patient information, clinician-patient communication and patient empowerment. 11 out of the 15 dimensions have been validated through experts' ratings. Further research on the four dimensions that received insufficient ratings is recommended. The priority order of the dimensions can help researchers and clinicians to focus on the most important dimensions of patient-centeredness. Overall, the model provides a useful framework that can be used in the development of measures, interventions, and medical education curricula, as well as the adoption of a new perspective in health policy.

  12. Validation of NH3 satellite observations by ground-based FTIR measurements

    NASA Astrophysics Data System (ADS)

    Dammers, Enrico; Palm, Mathias; Van Damme, Martin; Shephard, Mark; Cady-Pereira, Karen; Capps, Shannon; Clarisse, Lieven; Coheur, Pierre; Erisman, Jan Willem

    2016-04-01

    Global emissions of reactive nitrogen have been increasing to an unprecedented level due to human activities and are estimated to be a factor four larger than pre-industrial levels. Concentration levels of NOx are declining, but ammonia (NH3) levels are increasing around the globe. While NH3 at its current concentrations poses significant threats to the environment and human health, relatively little is known about the total budget and global distribution. Surface observations are sparse and mainly available for north-western Europe, the United States and China and are limited by the high costs and poor temporal and spatial resolution. Since the lifetime of atmospheric NH3 is short, on the order of hours to a few days, due to efficient deposition and fast conversion to particulate matter, the existing surface measurements are not sufficient to estimate global concentrations. Advanced space-based IR-sounders such as the Tropospheric Emission Spectrometer (TES), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) enable global observations of atmospheric NH3 that help overcome some of the limitations of surface observations. However, the satellite NH3 retrievals are complex requiring extensive validation. Presently there have only been a few dedicated satellite NH3 validation campaigns performed with limited spatial, vertical or temporal coverage. Recently a retrieval methodology was developed for ground-based Fourier Transform Infrared Spectroscopy (FTIR) instruments to obtain vertical concentration profiles of NH3. Here we show the applicability of retrieved columns from nine globally distributed stations with a range of NH3 pollution levels to validate satellite NH3 products.

  13. The Chemical Validation and Standardization Platform (CVSP): large-scale automated validation of chemical structure datasets.

    PubMed

    Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J

    2015-01-01

    There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, as well as the naivety of the software users, there are a myriad of issues that can exist with chemical structure representations online. In order to help facilitate validation and standardization of chemical structure datasets from various sources we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The chemical validation and standardization platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially requiring manual review. Each identified issue is assigned one of three levels of severity - Information, Warning, and Error - in order to conveniently inform the user of the need to browse and review subsets of their data. The validation process includes validation of atoms and bonds (e.g., making aware of query atoms and bonds), valences, and stereo. The standard form of submission of collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of data sets prepared for deposition to our ChemSpider database and in preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and ChEMBL 17 data set

  14. Is Earth-based scaling a valid procedure for calculating heat flows for Mars?

    NASA Astrophysics Data System (ADS)

    Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle

    2013-09-01

    Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows from scaling the present-day terrestrial heat flow to Mars, but the so-obtained values are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and casted doubts on the validity of the results obtained by these works. Here however we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that the scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.

  15. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various bias. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.

  16. How credible are the study results? Evaluating and applying internal validity tools to literature-based assessments of environmental health hazards

    PubMed Central

    Rooney, Andrew A.; Cooper, Glinda S.; Jahnke, Gloria D.; Lam, Juleen; Morgan, Rebecca L.; Boyles, Abee L.; Ratcliffe, Jennifer M.; Kraft, Andrew D.; Schünemann, Holger J.; Schwingl, Pamela; Walker, Teneille D.; Thayer, Kristina A.; Lunn, Ruth M.

    2016-01-01

    Environmental health hazard assessments are routinely relied upon for public health decision-making. The evidence base used in these assessments is typically developed from a collection of diverse sources of information of varying quality. It is critical that literature-based evaluations consider the credibility of individual studies used to reach conclusions through consistent, transparent and accepted methods. Systematic review procedures address study credibility by assessing internal validity or “risk of bias” — the assessment of whether the design and conduct of a study compromised the credibility of the link between exposure/intervention and outcome. This paper describes the commonalities and differences in risk-of-bias methods developed or used by five groups that conduct or provide methodological input for performing environmental health hazard assessments: the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group, the Navigation Guide, the National Toxicology Program’s (NTP) Office of Health Assessment and Translation (OHAT) and Office of the Report on Carcinogens (ORoC), and the Integrated Risk Information System of the U.S. Environmental Protection Agency (EPA-IRIS). Each of these groups have been developing and applying rigorous assessment methods for integrating across a heterogeneous collection of human and animal studies to inform conclusions on potential environmental health hazards. There is substantial consistency across the groups in the consideration of risk-of-bias issues or “domains” for assessing observational human studies. There is a similar overlap in terms of domains addressed for animal studies; however, the groups differ in the relative emphasis placed on different aspects of risk of bias. Future directions for the continued harmonization and improvement of these methods are also discussed. PMID:26857180

  17. How credible are the study results? Evaluating and applying internal validity tools to literature-based assessments of environmental health hazards.

    PubMed

    Rooney, Andrew A; Cooper, Glinda S; Jahnke, Gloria D; Lam, Juleen; Morgan, Rebecca L; Boyles, Abee L; Ratcliffe, Jennifer M; Kraft, Andrew D; Schünemann, Holger J; Schwingl, Pamela; Walker, Teneille D; Thayer, Kristina A; Lunn, Ruth M

    2016-01-01

    Environmental health hazard assessments are routinely relied upon for public health decision-making. The evidence base used in these assessments is typically developed from a collection of diverse sources of information of varying quality. It is critical that literature-based evaluations consider the credibility of individual studies used to reach conclusions through consistent, transparent and accepted methods. Systematic review procedures address study credibility by assessing internal validity or "risk of bias" - the assessment of whether the design and conduct of a study compromised the credibility of the link between exposure/intervention and outcome. This paper describes the commonalities and differences in risk-of-bias methods developed or used by five groups that conduct or provide methodological input for performing environmental health hazard assessments: the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group, the Navigation Guide, the National Toxicology Program's (NTP) Office of Health Assessment and Translation (OHAT) and Office of the Report on Carcinogens (ORoC), and the Integrated Risk Information System of the U.S. Environmental Protection Agency (EPA-IRIS). Each of these groups have been developing and applying rigorous assessment methods for integrating across a heterogeneous collection of human and animal studies to inform conclusions on potential environmental health hazards. There is substantial consistency across the groups in the consideration of risk-of-bias issues or "domains" for assessing observational human studies. There is a similar overlap in terms of domains addressed for animal studies; however, the groups differ in the relative emphasis placed on different aspects of risk of bias. Future directions for the continued harmonization and improvement of these methods are also discussed. Published by Elsevier Ltd.

  18. 40 CFR 1065.550 - Gas analyzer range validation and drift validation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... a dry sample measured with a CLD and the removed water is corrected based on measured CO2, CO, THC... may not validate the concentration subcomponents (e.g., THC and CH4 for NMHC) separately. For example, for NMHC measurements, perform drift validation on NMHC; do not validate THC and CH4 separately. (2...

  19. 40 CFR 1065.550 - Gas analyzer range validation and drift validation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... a dry sample measured with a CLD and the removed water is corrected based on measured CO2, CO, THC... may not validate the concentration subcomponents (e.g., THC and CH4 for NMHC) separately. For example, for NMHC measurements, perform drift validation on NMHC; do not validate THC and CH4 separately. (2...

  20. Youth Oriented Activity Trackers: Comprehensive Laboratory- and Field-Based Validation

    PubMed Central

    2017-01-01

    Background Commercial activity trackers are growing in popularity among adults and some are beginning to be marketed to children. There is, however, a paucity of independent research examining the validity of these devices to detect physical activity of different intensity levels. Objectives The purpose of this study was to determine the validity of the output from 3 commercial youth-oriented activity trackers in 3 phases: (1) orbital shaker, (2) structured indoor activities, and (3) 4 days of free-living activity. Methods Four units of each activity tracker (Movband [MB], Sqord [SQ], and Zamzee [ZZ]) were tested in an orbital shaker for 5-minutes at three frequencies (1.3, 1.9, and 2.5 Hz). Participants for Phase 2 (N=14) and Phase 3 (N=16) were 6-12 year old children (50% male). For Phase 2, participants completed 9 structured activities while wearing each tracker, the ActiGraph GT3X+ (AG) research accelerometer, and a portable indirect calorimetry system to assess energy expenditure (EE). For Phase 3, participants wore all 4 devices for 4 consecutive days. Correlation coefficients, linear models, and non-parametric statistics evaluated the criterion and construct validity of the activity tracker output. Results Output from all devices was significantly associated with oscillation frequency (r=.92-.99). During Phase 2, MB and ZZ only differentiated sedentary from light intensity (P<.01), whereas the SQ significantly differentiated among all intensity categories (all comparisons P<.01), similar to AG and EE. During Phase 3, AG counts were significantly associated with activity tracker output (r=.76, .86, and .59 for the MB, SQ, and ZZ, respectively). Conclusions Across study phases, the SQ demonstrated stronger validity than the MB and ZZ. The validity of youth-oriented activity trackers may directly impact their effectiveness as behavior modification tools, demonstrating a need for more research on such devices. PMID:28724509

  1. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 2; Validation Results

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA s New Millennium Program Space Technology 8 (ST 8) Project, Goddard Space Fight Center has conducted a Thermal Loop experiment to advance the maturity of the Thermal Loop technology from proof of concept to prototype demonstration in a relevant environment , i.e. from a technology readiness level (TRL) of 3 to a level of 6. The thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for the TRL 4 and TRL 5 validations, respectively, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady state and transient behaviors of the MLHP during various validation tests. The MLHP demonstrated excellent performance during experimental tests and the analytical model predictions agreed very well with experimental data. All success criteria at various TRLs were met. Hence, the Thermal Loop technology has reached a TRL of 6. This paper presents the validation results, both experimental and analytical, of such a technology development effort.

  2. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.

  3. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

    Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the miss-operation of ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity problem of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faults to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI) that evaluates the influence of the current IAQ on passenger health is then compared using the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk since it more accurately validates sensor faults than do conventional methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Validation of Web-Based Physical Activity Measurement Systems Using Doubly Labeled Water

    PubMed Central

    Yamaguchi, Yukio; Yamada, Yosuke; Tokushima, Satoru; Hatamoto, Yoichi; Sagayama, Hiroyuki; Kimura, Misaka; Higaki, Yasuki; Tanaka, Hiroaki

    2012-01-01

    Background Online or Web-based measurement systems have been proposed as convenient methods for collecting physical activity data. We developed two Web-based physical activity systems—the 24-hour Physical Activity Record Web (24hPAR WEB) and 7 days Recall Web (7daysRecall WEB). Objective To examine the validity of two Web-based physical activity measurement systems using the doubly labeled water (DLW) method. Methods We assessed the validity of the 24hPAR WEB and 7daysRecall WEB in 20 individuals, aged 25 to 61 years. The order of email distribution and subsequent completion of the two Web-based measurements systems was randomized. Each measurement tool was used for a week. The participants’ activity energy expenditure (AEE) and total energy expenditure (TEE) were assessed over each week using the DLW method and compared with the respective energy expenditures estimated using the Web-based systems. Results The mean AEE was 3.90 (SD 1.43) MJ estimated using the 24hPAR WEB and 3.67 (SD 1.48) MJ measured by the DLW method. The Pearson correlation for AEE between the two methods was r = .679 (P < .001). The Bland-Altman 95% limits of agreement ranged from –2.10 to 2.57 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .874 (P < .001). The mean AEE was 4.29 (SD 1.94) MJ using the 7daysRecall WEB and 3.80 (SD 1.36) MJ by the DLW method. The Pearson correlation for AEE between the two methods was r = .144 (P = .54). The Bland-Altman 95% limits of agreement ranged from –3.83 to 4.81 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .590 (P = .006). The average input times using terminal devices were 8 minutes and 10 seconds for the 24hPAR WEB and 6 minutes and 38 seconds for the 7daysRecall WEB. Conclusions Both Web-based systems were found to be effective methods for collecting physical activity data and are appropriate for use in epidemiological studies. Because the measurement

  5. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, Mohammed Omair

    2012-01-01

    Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation - well before actual hardware exists. Although simulations focused around data processing procedures at subsystem and device level, they can also be applied to system level analysis to simulate mission scenarios and consumable tracking (e.g. power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.

  6. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software

    PubMed Central

    Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E.

    2018-01-01

    Background The rehabilitation process is a fundamental stage for recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a tracking platform of the movement made by an individual's upper limb using Kinect sensor(s) to be applied for the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. Methods The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and report of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. Results The agreement of joint angles measured with the proposed software and goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning interchangeability of both techniques. Additionally, the results of Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for the clinical use. Conclusion The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of the rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the software Kinect for rehabilitation and other applications, and the expert's opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement

  7. Predictive and Incremental Validity of Global and Domain-Based Adolescent Life Satisfaction Reports

    ERIC Educational Resources Information Center

    Haranin, Emily C.; Huebner, E. Scott; Suldo, Shannon M.

    2007-01-01

    Concurrent, predictive, and incremental validity of global and domain-based adolescent life satisfaction reports are examined with respect to internalizing and externalizing behavior problems. The Students' Life Satisfaction Scale (SLSS), Multidimensional Students' Life Satisfaction Scale (MSLSS), and measures of internalizing and externalizing…

  8. Dust transport model validation using satellite- and ground-based methods in the southwestern United States

    NASA Astrophysics Data System (ADS)

    Mahler, Anna-Britt; Thome, Kurt; Yin, Dazhong; Sprigg, William A.

    2006-08-01

    Dust is known to aggravate respiratory diseases. This is an issue in the desert southwestern United States, where windblown dust events are common. The Public Health Applications in Remote Sensing (PHAiRS) project aims to address this problem by using remote-sensing products to assist in public health decision support. As part of PHAiRS, a model for simulating desert dust cycles, the Dust Regional Atmospheric Modeling (DREAM) system is employed to forecast dust events in the southwestern US. Thus far, DREAM has been validated in the southwestern US only in the lower part of the atmosphere by comparison with measurement and analysis products from surface synoptic, surface Meteorological Aerodrome Report (METAR), and upper-air radiosonde. This study examines the validity of the DREAM algorithm dust load prediction in the desert southwestern United States by comparison with satellite-based MODIS level 2 and MODIS Deep Blue aerosol products, and ground-based observations from the AERONET network of sunphotometers. Results indicate that there are difficulties obtaining MODIS L2 aerosol optical thickness (AOT) data in the desert southwest due to low AOT algorithm performance over areas with high surface reflectances. MODIS Deep Blue aerosol products show improvement, but the temporal and vertical resolution of MODIS data limit its utility for DREAM evaluation. AERONET AOT data show low correlation to DREAM dust load predictions. The potential contribution of space- or ground-based lidar to the PHAiRS project is also examined.

  9. Agreeing on Validity Arguments

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    2013-01-01

    Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…

  10. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of existing storm surge or circulation models today operational in Europe, as well as near-real time tide gauge data in the region, with the following main goals: - providing an easy access to existing forecasts, as well as to its performance and model validation, by means of an adequate visualization tool - generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average Technique (BMA) The system was developed and implemented within ECOOP (C.No. 036355) European Project for the NOOS and the IBIROOS regions, based on MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado respectively. The Bayesian Modelling Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts PDF's; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. Results of validation of the different models and BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved to be useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences on baroclinic and barotropic models for sea level applications and to confirm the general improvement of the BMA forecasts.

  11. The Turkish Version of Web-Based Learning Platform Evaluation Scale: Reliability and Validity Study

    ERIC Educational Resources Information Center

    Dag, Funda

    2016-01-01

    The purpose of this study is to determine the language equivalence and the validity and reliability of the Turkish version of the "Web-Based Learning Platform Evaluation Scale" ("Web Tabanli Ögrenme Ortami Degerlendirme Ölçegi" [WTÖODÖ]) used in the selection and evaluation of web-based learning environments. Within this scope,…

  12. SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, R; Heron, D; Huq, M

    2015-06-15

    Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can be used to generate estimated DVHs and planning constraints for organs at risk (OARs), based upon a model generated from previous plans. These estimates are based upon the planning CT scan. However, for distensible OARs like the bladder and rectum, daily variations in volume may make the dose estimates invalid. The purpose of this study is to determine whether knowledge-based DVH dose estimates may be valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimatesmore » and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes were then compared at 75, 60,and 40 Gy for the bladder and rectum between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points., Volumes of the rectum and bladder varied widely between planning CT and the CBCTs, ranging from 0.46 to 2.42 for the bladder and 0.71 to 2.18 for the rectum, causing relative dose volumes to vary between planning CT and CBCT, but absolute dose volumes were more consistent. The overall ratio of CBCT/plan dose volumes was 1.02 ±0.27 for rectum and 0.98 ±0.20 for bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs are still valid, in absolute volume terms, between treatment planning scans and CBCT’s taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.« less

  13. Validated Smartphone-Based Apps for Ear and Hearing Assessments: A Review

    PubMed Central

    Pallawela, Danuk

    2016-01-01

    Background An estimated 360 million people have a disabling hearing impairment globally, the vast majority of whom live in low- and middle-income countries (LMICs). Early identification through screening is important to negate the negative effects of untreated hearing impairment. Substantial barriers exist in screening for hearing impairment in LMICs, such as the requirement for skilled hearing health care professionals and prohibitively expensive specialist equipment to measure hearing. These challenges may be overcome through utilization of increasingly available smartphone app technologies for ear and hearing assessments that are easy to use by unskilled professionals. Objective Our objective was to identify and compare available apps for ear and hearing assessments and consider the incorporation of such apps into hearing screening programs Methods In July 2015, the commercial app stores Google Play and Apple App Store were searched to identify apps for ear and hearing assessments. Thereafter, six databases (EMBASE, MEDLINE, Global Health, Web of Science, CINAHL, and mHealth Evidence) were searched to assess which of the apps identified in the commercial review had been validated against gold standard measures. A comparison was made between validated apps. Results App store search queries returned 30 apps that could be used for ear and hearing assessments, the majority of which are for performing audiometry. The literature search identified 11 eligible validity studies that examined 6 different apps. uHear, an app for self-administered audiometry, was validated in the highest number of peer reviewed studies against gold standard pure tone audiometry (n=5). However, the accuracy of uHear varied across these studies. Conclusions Very few of the available apps have been validated in peer-reviewed studies. Of the apps that have been validated, further independent research is required to fully understand their accuracy at detecting ear and hearing conditions. PMID

  14. Validity evidence for an OSCE to assess competency in systems-based practice and practice-based learning and improvement: a preliminary investigation.

    PubMed

    Varkey, Prathibha; Natt, Neena; Lesnick, Timothy; Downing, Steven; Yudkowsky, Rachel

    2008-08-01

    To determine the psychometric properties and validity of an OSCE to assess the competencies of Practice-Based Learning and Improvement (PBLI) and Systems-Based Practice (SBP) in graduate medical education. An eight-station OSCE was piloted at the end of a three-week Quality Improvement elective for nine preventive medicine and endocrinology fellows at Mayo Clinic. The stations assessed performance in quality measurement, root cause analysis, evidence-based medicine, insurance systems, team collaboration, prescription errors, Nolan's model, and negotiation. Fellows' performance in each of the stations was assessed by three faculty experts using checklists and a five-point global competency scale. A modified Angoff procedure was used to set standards. Evidence for the OSCE's validity, feasibility, and acceptability was gathered. Evidence for content and response process validity was judged as excellent by institutional content experts. Interrater reliability of scores ranged from 0.85 to 1 for most stations. Interstation correlation coefficients ranged from -0.62 to 0.99, reflecting case specificity. Implementation cost was approximately $255 per fellow. All faculty members agreed that the OSCE was realistic and capable of providing accurate assessments. The OSCE provides an opportunity to systematically sample the different subdomains of Quality Improvement. Furthermore, the OSCE provides an opportunity for the demonstration of skills rather than the testing of knowledge alone, thus making it a potentially powerful assessment tool for SBP and PBLI. The study OSCE was well suited to assess SBP and PBLI. The evidence gathered through this study lays the foundation for future validation work.

  15. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide some validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structure. The validation of the hemodynamics assessment is based on invasive clinical measurements and cross-validation techniques with the Philips proprietary validated software's Qflow and 2D Perfursion. The modeling results are validated for 30 CAVM patients for 150 vessel locations. Mean flow, diameter, and pressure were compared between modeling results and with clinical/cross validation measurements, using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationship between blood flow, vessel diameter, and pressure between them. Univariate analysis is used to assess the relationship between vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow results were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations of cerebral regions. Also, the model is cross validated with Philips proprietary validated software's Qflow and 2D Perfursion. Our results shows that modeling results and clinical results are nearly matching with a small deviation. In this article, we have validated our modeling results with clinical measurements. The new approach for cross-validation is proposed by demonstrating the accuracy of our results with a validated product in a clinical environment.

  16. Measuring limitations in activities of daily living: a population-based validation of a short questionnaire.

    PubMed

    Elfering, Achim; Cronenberg, Sonja; Grebner, Simone; Tamcan, Oezguer; Müller, Urs

    2017-12-01

    A newly developed questionnaire assessing limitations in activity of daily living (LADL-Q) that should improve assessment of LADL is tested in a large population-based validation study. This survey was paper-based. Overall, 16,634 individuals who were representative of the working population in the German-speaking part of Switzerland participated in the study. Item analysis was used the final version of the LADL-Q to four items per subscale that correspond to potential problems in three body regions (back and neck, upper extremities, lower extremities). Analysis included tests for reliability, internal consistency, dimensionality and convergent validity. Test-retest reliability coefficients after 2 weeks ranged from 0.82 to 0.99 (Mdn = 0.87), with no item having a coefficient below 0.60. The median item-total coefficients ranged between moderate and good. Correlation coefficients between LADL-Q subscales and three validated clinical instruments (Western Ontario and McMaster Universities osteoarthritis index, shoulder pain disability index, Oswestry) ranged from 0.63 to 0.81. In structural equation modeling the three subscales were significantly related with two important outcomes in occupational rehabilitation: self-reported general health and daily task performance. The new LADL-Q is a brief, reliable and valid tool for assessment of LADL in studies on musculoskeletal health.

  17. [Social consequence of a dysphonic voice, design and validation of a questionnaire and first results].

    PubMed

    Revis, J; Robieux, C; Ghio, A; Giovanni, A

    2013-01-01

    In our society, based on communication, dysphonia becomes a handicap that could be responsible of work discrimination. Actually, several commercial services are provided by phone only, and voice quality is mandatory for the employees. This work aim was to determine the social picture relayed by dysphonia. Our hypothesis was that dysphonia sounds pejorative compared to normal voice. 40 voice samples (30 dysphonic and 10 normal) were presented randomly to a perceptual jury of 20 naïve listener. The task was for each of them to fill a questionnaire, designed specifically to describe the speaker's look and personality. 20 items were evaluated, divided into 4 categories: health, temperament, appearance, and way of life. The results showed significant differences between normal subjects and dysphonic patients. For instance, the pathological voices were depicted as more tired, introverted, sloppy than normal voices, and less trustable. No significant differences were found according to the severity of voice disorders. This work is presently continued. It allowed to validate our questionnaire and has offers great perspectives on patient's management and voice therapy.

  18. Soil Moisture Active Passive Satellite Status and Recent Validation Results

    USDA-ARS?s Scientific Manuscript database

    The Soil Moisture Active Passive (SMAP) mission was launched in January, 2015 and began its calibration and validation (cal/val) phase in May, 2015. Cal/Val will begin with a focus on instrument measurements, brightness temperature and backscatter, and evolve to the geophysical products that include...

  19. Validation of GPU based TomoTherapy dose calculation engine.

    PubMed

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition(C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes made the GPU dose slightly different from the CPU-cluster dose. In order for the commercial release of the GPU dose engine, its accuracy has to be validated. Thirty eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with the absolute point dose measurement with ion chamber and film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The worst case observed in the phantom had 0.22% voxels violating the criterion. In patient cases, the worst percentage of voxels violating the criterion was 0.57%. For absolute point dose verification, all cases agreed with measurement to within ±3% with average error magnitude within 1%. All cases passed the acceptance criterion that more than 95% of the pixels have Γ(3%, 3 mm) < 1 in film measurement, and the average passing pixel percentage is 98.5%-99%. The GPU dose engine also showed similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.

  20. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity

    PubMed Central

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-01-01

    Frequently, patients who suffer injuries in some lower member require forearm crutches in order to partially unload weight-bearing. These lesions cause pain in lower limb unloading and their progression should be controlled objectively to avoid significant errors in accuracy and, consequently, complications and after effects in lesions. The design of a new and feasible tool that allows us to control and improve the accuracy of loads exerted on crutches during aided gait is necessary, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 Platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI Platform and the GCH System. A regression line to determine the predictive ability of the GCH system towards the AMTI Platform was found, which obtained a precision of 99.3%. A detailed statistical analysis is presented for all the measurements and also segregated for several requested loads on the crutches (10%, 25% and 50% of body weight). Our results show that our system, designed for assessing loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of loads. PMID:27338396

  1. Simple SNP-based minimal marker genotyping for Humulus lupulus L. identification and variety validation.

    PubMed

    Henning, John A; Coggins, Jamie; Peterson, Matthew

    2015-10-06

    in the study to be genetically differentiated from each other. Next, we compared genetic matrices calculated from the minimal marker sets [(Table 2; 6-, 7-, 8-, 10- and 12-marker set matrices] and that of a matrix calculated from a set of markers with no missing data across all 116 samples (1006 SNP markers). The minimum number of markers required to meet both specifications was a set of 7-markers (Table 3). These seven SNPs were then aligned with a genome assembly, and DNA sequence both upstream and downstream were used to identify primer sequences that can be used to develop seven amplicons for high resolution melting curve PCR detection or other SNP-based PCR detection methods. This study identifies a set of 7 SNP markers that may prove useful for the identification and validation of hop varieties and accessions. Variety validation of unknown samples assumes that the variety under question has been included a priori in a discovery panel. These results are based upon in silica studies and markers need to be validated using different SNP marker technology upon a differential set of hop genotypes. The marker sequence data and suggested primer sets provide potential means to fingerprint hop varieties in most genetic laboratories utilizing SNP-marker technology.

  2. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  3. A formal approach to validation and verification for knowledge-based control systems

    NASA Technical Reports Server (NTRS)

    Castore, Glen

    1987-01-01

    As control systems become more complex in response to desires for greater system flexibility, performance and reliability, the promise is held out that artificial intelligence might provide the means for building such systems. An obstacle to the use of symbolic processing constructs in this domain is the need for verification and validation (V and V) of the systems. Techniques currently in use do not seem appropriate for knowledge-based software. An outline of a formal approach to V and V for knowledge-based control systems is presented.

  4. Validation of Erosion 3D in Lower Saxony - Comparison between modelled soil erosion events and results of a long term monitoring project

    NASA Astrophysics Data System (ADS)

    Bug, Jan; Mosimann, Thomas

    2013-04-01

    Since 2000 water erosion has been surveyed on 400 ha arable land in three different regions of Lower Saxony (Mosimann et al. 2009). The results of this long-term survey are used for the validation of the soil erosion models such as USLE and Erosion 3D. The validation of the physically-based model Erosion 3D (Schmidt & Werner 2000) is possible because the survey analyses the effects (soil loss, sediment yield, deposition on site) of single thunder storm events and also maps major factors of soil erosion (soil, crop, tillage). A 12.5 m Raster DEM was used to model the soil erosion events.Rainfalldata was acquired from climate stations. Soil and landuse parameters were derived from the "Parameterkatalog Sachsen"(Michael et al. 1996). During thirteen years of monitoring, high intensity storms fell less frequently than expected. High intensity rainfalls with a return period of five or ten years usually occurred during periods of maximum plant cover.Winter events were ruled out because dataon snow melt and rainfallwere not measured. The validation is therefore restricted to 80 events. The validation consists of three parts. The first part compares the spatial distribution of the mapped soil erosion with the model results. The second part calculates the difference in the amount of redistributed soil. The third part analyses off-site effects such as sediment yield and pollution of water bodies. The validation shows that the overall result of erosion 3D is quite good. Spatial hotspots of soil erosion and of off-site effects are predicted correctly in most cases. However, quantitative comparison is more problematic, because the mapping allows only the quantification of rillerosion and not of sheet erosion. So as a rule,the predicted soil loss is higher than the mapped. The prediction of rill development is also problematic. While the model is capable of predicting rills in thalwegs, the modelling of erosion in tractor tracks and headlands is more complicated. In order to

  5. Real-world effects of medications for chronic obstructive pulmonary disease: protocol for a UK population-based non-interventional cohort study with validation against randomised trial results.

    PubMed

    Wing, Kevin; Williamson, Elizabeth; Carpenter, James R; Wise, Lesley; Schneeweiss, Sebastian; Smeeth, Liam; Quint, Jennifer K; Douglas, Ian

    2018-03-25

    Chronic obstructive pulmonary disease (COPD) is a progressive disease affecting 3 million people in the UK, in which patients exhibit airflow obstruction that is not fully reversible. COPD treatment guidelines are largely informed by randomised controlled trial results, but it is unclear if these findings apply to large patient populations not studied in trials. Non-interventional studies could be used to study patient groups excluded from trials, but the use of these studies to estimate treatment effectiveness is in its infancy. In this study, we will use individual trial data to validate non-interventional methods for assessing COPD treatment effectiveness, before applying these methods to the analysis of treatment effectiveness within people excluded from, or under-represented in COPD trials. Using individual patient data from the landmark COPD Towards a Revolution in COPD Health (TORCH) trial and validated methods for detecting COPD and exacerbations in routinely collected primary care data, we will assemble a cohort in the UK Clinical Practice Research Datalink (selecting people between 1 January 2004 and 1 January 2017) with similar characteristics to TORCH participants and test whether non-interventional data can generate comparable results to trials, using cohort methodology with propensity score techniques to adjust for potential confounding. We will then use the methodological template we have developed to determine risks and benefits of COPD treatments in people excluded from TORCH. Outcomes are pneumonia, COPD exacerbation, mortality and time to treatment change. Groups to be studied include the elderly (>80 years), people with substantial comorbidity, people with and without underlying cardiovascular disease and people with mild COPD. Ethical approval has been granted by the London School of Hygiene & Tropical Medicine Ethics Committee (Ref: 11997). The study has been approved by the Independent Scientific Advisory Committee of the UK Medicines and

  6. Comparing the Construct and Criterion-Related Validity of Ability-Based and Mixed-Model Measures of Emotional Intelligence

    ERIC Educational Resources Information Center

    Livingstone, Holly A.; Day, Arla L.

    2005-01-01

    Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…

  7. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    PubMed

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure comprising the National GP Knowledge Test (LHK), a situational judgement tests (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM) was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. Admission decision was based on the results of the current procedure. Study participants did hardly differ from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9 % (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7(SD 0.5) and 2.9(SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor while the ICC of PBDI and SIM showed acceptable levels of reliability. Findings on content validity and reliability of these new instruments are promising to realize a competency based procedure. Further development of the instruments and research on predictive validity should be pursued.

  8. Generalization of Selection Test Validity.

    ERIC Educational Resources Information Center

    Colbert, G. A.; Taylor, L. R.

    1978-01-01

    This is part three of a three-part series concerned with the empirical development of homogeneous families of insurance company jobs based on data from the Position Analysis Questionnaire (PAQ). This part involves validity generalizations within the job families which resulted from the previous research. (Editor/RK)

  9. Emotional and tangible social support in a German population-based sample: Development and validation of the Brief Social Support Scale (BS6).

    PubMed

    Beutel, Manfred E; Brähler, Elmar; Wiltink, Jörg; Michal, Matthias; Klein, Eva M; Jünger, Claus; Wild, Philipp S; Münzel, Thomas; Blettner, Maria; Lackner, Karl; Nickels, Stefan; Tibubos, Ana N

    2017-01-01

    Aim of the study was the development and validation of the psychometric properties of a six-item bi-factorial instrument for the assessment of social support (emotional and tangible support) with a population-based sample. A cross-sectional data set of N = 15,010 participants enrolled in the Gutenberg Health Study (GHS) in 2007-2012 was divided in two sub-samples. The GHS is a population-based, prospective, observational single-center cohort study in the Rhein-Main-Region in western Mid-Germany. The first sub-sample was used for scale development by performing an exploratory factor analysis. In order to test construct validity, confirmatory factor analyses were run to compare the extracted bi-factorial model with the one-factor solution. Reliability of the scales was indicated by calculating internal consistency. External validity was tested by investigating demographic characteristics health behavior, and distress using analysis of variance, Spearman and Pearson correlation analysis, and logistic regression analysis. Based on an exploratory factor analysis, a set of six items was extracted representing two independent factors. The two-factor structure of the Brief Social Support Scale (BS6) was confirmed by the results of the confirmatory factor analyses. Fit indices of the bi-factorial model were good and better compared to the one-factor solution. External validity was demonstrated for the BS6. The BS6 is a reliable and valid short scale that can be applied in social surveys due to its brevity to assess emotional and practical dimensions of social support.

  10. Reconsidering vocational interests for personnel selection: the validity of an interest-based selection test in relation to job knowledge, job performance, and continuance intentions.

    PubMed

    Van Iddekinge, Chad H; Putka, Dan J; Campbell, John P

    2011-01-01

    Although vocational interests have a long history in vocational psychology, they have received extremely limited attention within the recent personnel selection literature. We reconsider some widely held beliefs concerning the (low) validity of interests for predicting criteria important to selection researchers, and we review theory and empirical evidence that challenge such beliefs. We then describe the development and validation of an interests-based selection measure. Results of a large validation study (N = 418) reveal that interests predicted a diverse set of criteria—including measures of job knowledge, job performance, and continuance intentions—with corrected, cross-validated Rs that ranged from .25 to .46 across the criteria (mean R = .31). Interests also provided incremental validity beyond measures of general cognitive aptitude and facets of the Big Five personality dimensions in relation to each criterion. Furthermore, with a couple exceptions, the interest scales were associated with small to medium subgroup differences, which in most cases favored women and racial minorities. Taken as a whole, these results appear to call into question the prevailing thought that vocational interests have limited usefulness for selection.

  11. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.

  12. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  13. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversitymore » exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.« less

  14. Distribution and Validation of CERES Irradiance Global Data Products Via Web Based Tools

    NASA Technical Reports Server (NTRS)

    Rutan, David; Mitrescu, Cristian; Doelling, David; Kato, Seiji

    2016-01-01

    The CERES SYN1deg product provides climate quality 3-hourly globally gridded and temporally complete maps of top of atmosphere, in atmosphere, and surface fluxes. This product requires efficient release to the public and validation to maintain quality assurance. The CERES team developed web-tools for the distribution of both the global gridded products and grid boxes that contain long term validation sites that maintain high quality flux observations at the Earth's surface. These are found at: http://ceres.larc.nasa.gov/order_data.php. In this poster we explore the various tools available to users to sub-set, download, and validate using surface observations the SYN1Deg and Surface-EBAF products. We also analyze differences found in long-term records from well-maintained land surface sites such as the ARM central facility and high quality buoy radiometers, which due to their isolated nature cannot be maintained in a similar manner to their land based counterparts.

  15. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversitymore » exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.« less

  16. Validation of Regression-Based Myogenic Correction Techniques for Scalp and Source-Localized EEG

    PubMed Central

    McMenamin, Brenton W.; Shackman, Alexander J.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2008-01-01

    EEG and EEG source-estimation are susceptible to electromyographic artifacts (EMG) generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect - even in low frequencies, such as alpha (8–13Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist and the utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference-scores, within-subjects condition-wise, and within-subject epoch-wise on the scalp and in data modeled using the LORETA algorithm. Although within-subject epoch-wise showed superior performance on the scalp, no technique succeeded in the source-space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development. PMID:19298626

  17. Validation of a Performance Assessment Instrument in Problem-Based Learning Tutorials Using Two Cohorts of Medical Students

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2016-01-01

    Although problem-based learning (PBL) has been widely used in medical schools, few studies have attended to the assessment of PBL processes using validated instruments. This study examined reliability and validity for an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism.…

  18. Development and Technical Validation of the Mobile Based Assistive Listening System: A Smartphone-Based Remote Microphone.

    PubMed

    Lopez, Esteban Alejandro; Costa, Orozimbo Alves; Ferrari, Deborah Viviane

    2016-10-01

    The purpose of this research note is to describe the development and technical validation of the Mobile Based Assistive Listening System (MoBALS), a free-of-charge smartphone-based remote microphone application. MoBALS Version 1.0 was developed for Android (Version 2.1 or higher) and was coded with Java using Eclipse Indigo with the Android Software Development Kit. A Wi-Fi router with background traffic and 2 affordable smartphones were used for debugging and technical validation comprising, among other things, multicasting capability, data packet loss, and battery consumption. MoBALS requires at least 2 smartphones connected to the same Wi-Fi router for signal transmission and reception. Subscriber identity module cards or Internet connections are not needed. MoBALS can be used alone or connected to a hearing aid or cochlear implant via direct audio input. Maximum data packet loss was 99.28%, and minimum battery life was 5 hr. Other relevant design specifications and their implementation are described. MoBALS performed as a remote microphone with enhanced accessibility features and avoids overhead expenses by using already-available and affordable technology. The further development and technical revalidation of MoBALS will be followed by clinical evaluation with persons with hearing impairment.

  19. Malaria surveys using rapid diagnostic tests and validation of results using post hoc quantification of Plasmodium falciparum histidine-rich protein 2.

    PubMed

    Plucinski, Mateusz; Dimbu, Rafael; Candrinho, Baltazar; Colborn, James; Badiane, Aida; Ndiaye, Daouda; Mace, Kimberly; Chang, Michelle; Lemoine, Jean F; Halsey, Eric S; Barnwell, John W; Udhayakumar, Venkatachalam; Aidoo, Michael; Rogier, Eric

    2017-11-07

    Rapid diagnostic test (RDT) positivity is supplanting microscopy as the standard measure of malaria burden at the population level. However, there is currently no standard for externally validating RDT results from field surveys. Individuals' blood concentration of the Plasmodium falciparum histidine rich protein 2 (HRP2) protein were compared to results of HRP2-detecting RDTs in participants from field surveys in Angola, Mozambique, Haiti, and Senegal. A logistic regression model was used to estimate the HRP2 concentrations corresponding to the 50 and 90% level of detection (LOD) specific for each survey. There was a sigmoidal dose-response relationship between HRP2 concentration and RDT positivity for all surveys. Variation was noted in estimates for field RDT sensitivity, with the 50% LOD ranging between 0.076 and 6.1 ng/mL and the 90% LOD ranging between 1.1 and 53 ng/mL. Surveys conducted in two different provinces of Angola using the same brand of RDT and same study methodology showed a threefold difference in LOD. Measures of malaria prevalence estimated using population RDT positivity should be interpreted in the context of potentially large variation in RDT LODs between, and even within, surveys. Surveys based on RDT positivity would benefit from external validation of field RDT results by comparing RDT positivity and antigen concentration.

  20. Towards development and validation of an intraoperative assessment tool for robot-assisted radical prostatectomy training: results of a Delphi study.

    PubMed

    Morris, Christopher; Hoogenes, Jen; Shayegan, Bobby; Matsumoto, Edward D

    2017-01-01

    As urology training shifts toward competency-based frameworks, the need for tools for high stakes assessment of trainees is crucial. Validated assessment metrics are lacking for many robot-assisted radical prostatectomy (RARP). As it is quickly becoming the gold standard for treatment of localized prostate cancer, the development and validation of a RARP assessment tool for training is timely. We recruited 13 expert RARP surgeons from the United States and Canada to serve as our Delphi panel. Using an initial inventory developed via a modified Delphi process with urology residents, fellows, and staff at our institution, panelists iteratively rated each step and sub-step on a 5-point Likert scale of agreement for inclusion in the final assessment tool. Qualitative feedback was elicited for each item to determine proper step placement, wording, and suggestions. Panelist's responses were compiled and the inventory was edited through three iterations, after which 100% consensus was achieved. The initial inventory steps were decreased by 13% and a skip pattern was incorporated. The final RARP stepwise inventory was comprised of 13 critical steps with 52 sub-steps. There was no attrition throughout the Delphi process. Our Delphi study resulted in a comprehensive inventory of intraoperative RARP steps with excellent consensus. This final inventory will be used to develop a valid and psychometrically sound intraoperative assessment tool for use during RARP training and evaluation, with the aim of increasing competency of all trainees. Copyright® by the International Brazilian Journal of Urology.

  1. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    DTIC Science & Technology

    2013-08-01

    in Sequential Design Optimization with Concurrent Calibration-Based Model Validation Dorin Drignei 1 Mathematics and Statistics Department...Validation 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Dorin Drignei; Zissimos Mourelatos; Vijitashwa Pandey

  2. Validity and reliability of criterion based clinical audit to assess obstetrical quality of care in West Africa.

    PubMed

    Pirkle, Catherine M; Dumont, Alexandre; Traore, Mamadou; Zunzunegui, Maria-Victoria

    2012-10-29

    In Mali and Senegal, over 1% of women die giving birth in hospital. At some hospitals, over a third of infants are stillborn. Many deaths are due to substandard medical practices. Criterion-based clinical audits (CBCA) are increasingly used to measure and improve obstetrical care in resource-limited settings, but their measurement properties have not been formally evaluated. In 2011, we published a systematic review of obstetrical CBCA highlighting insufficient considerations of validity and reliability. The objective of this study is to develop an obstetrical CBCA adapted to the West African context and assess its reliability and validity. This work was conducted as a sub-study within a cluster randomized trial known as QUARITE. Criteria were selected based on extensive literature review and expert opinion. Early 2010, two auditors applied the CBCA to identical samples at 8 sites in Mali and Senegal (n = 185) to evaluate inter-rater reliability. In 2010-11, we conducted CBCA at 32 hospitals to assess construct validity (n = 633 patients). We correlated hospital characteristics (resource availability, facility perinatal and maternal mortality) with mean hospital CBCA scores. We used generalized estimating equations to assess whether patient CBCA scores were associated with perinatal mortality. Results demonstrate substantial (ICC = 0.67, 95% CI 0.54; 0.76) to elevated inter-rater reliability (ICC = 0.84, 95% CI 0.77; 0.89) in Senegal and Mali, respectively. Resource availability positively correlated with mean hospital CBCA scores and maternal and perinatal mortality were inversely correlated with hospital CBCA scores. Poor CBCA scores, adjusted for hospital and patient characteristics, were significantly associated with perinatal mortality (OR 1.84, 95% CI 1.01-3.34). Our CBCA has substantial inter-rater reliability and there is compelling evidence of its validity as the tool performs according to theory. Current Controlled Trials ISRCTN46950658.

  3. The development and validity of the Salford Gait Tool: an observation-based clinical gait assessment tool.

    PubMed

    Toro, Brigitte; Nester, Christopher J; Farren, Pauline C

    2007-03-01

    To develop the construct, content, and criterion validity of the Salford Gait Tool (SF-GT) and to evaluate agreement between gait observations using the SF-GT and kinematic gait data. Tool development and comparative evaluation. University in the United Kingdom. For designing construct and content validity, convenience samples of 10 children with hemiplegic, diplegic, and quadriplegic cerebral palsy (CP) and 152 physical therapy students and 4 physical therapists were recruited. For developing criterion validity, kinematic gait data of 13 gait clusters containing 56 children with hemiplegic, diplegic, and quadriplegic CP and 11 neurologically intact children was used. For clinical evaluation, a convenience sample of 23 pediatric physical therapists participated. We developed a sagittal plane observational gait assessment tool through a series of design, test, and redesign iterations. The tool's grading system was calibrated using kinematic gait data of 13 gait clusters and was evaluated by comparing the agreement of gait observations using the SF-GT with kinematic gait data. Criterion standard kinematic gait data. There was 58% mean agreement based on grading categories and 80% mean agreement based on degree estimations evaluated with the least significant difference method. The new SF-GT has good concurrent criterion validity.

  4. Development and validation of a web-based questionnaire for surveying the health and working conditions of high-performance marine craft populations.

    PubMed

    de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl

    2016-06-20

    High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to the musculoskeletal health conditions and performance in high-performance marine craft populations. A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted

  5. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where the solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to the parallel bench-scale experimental data. Two unit problems with increasing level of complexity are proposed to breakdown the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems to separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e. the CO2 mass transfer across a falling ethanolaminemore » (MEA) film in absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, e.g. Henry’s constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical adsorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.« less

  6. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering-indices ("Slick-criteria"), and 3 independent samples of participants (total n =893). The Slick-criteria yield an estimate of the probability of malingering based on the presence of an external incentive, evidence from neuropsychological testing, from self-report and clinical data. In study (1) a validity parameter is derived using reaction time data of a sample, composed of inpatients with recent severe brain lesions not involved in litigation and of litigants with and without brain lesion. In study (2) the validity parameter is tested in an independent sample of litigants. In study (3) the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2 sensitivity=0.94, specificity=1.00; study 3 sensitivity=0.79, specificity=0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  8. Development and Validation of the Evidence-Based Practice Process Assessment Scale: Preliminary Findings

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2010-01-01

    Objective: This report describes the development and preliminary findings regarding the reliability, validity, and sensitivity of a scale that has been developed to assess practitioners' perceived familiarity with, attitudes about, and implementation of the phases of the evidence-based practice (EBP) process. Method: After a panel of national…

  9. Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study.

    PubMed

    Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald

    2015-03-21

    Monte Carlo (MC) methods are recognized as the gold-standard for dose calculation, however they have not replaced analytical methods up to now due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) using an experimentally validated multi-purpose MC code (TOPAS) and compare their performance for clinical patient cases. Clinical cases from five treatment sites were selected covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except for prostate cases. Although clinically insignificant, gPMC resulted in systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly the gamma index analysis with 1%/1 mm criterion failed for most beams for this site, while for 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, calculation time for a single beam for a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was

  10. Exploring geo-tagged photos for land cover validation with deep learning

    NASA Astrophysics Data System (ADS)

    Xing, Hanfa; Meng, Yuan; Wang, Zixuan; Fan, Kaixuan; Hou, Dongyang

    2018-07-01

    Land cover validation plays an important role in the process of generating and distributing land cover thematic maps, which is usually implemented by high cost of sample interpretation with remotely sensed images or field survey. With an increasing availability of geo-tagged landscape photos, the automatic photo recognition methodologies, e.g., deep learning, can be effectively utilised for land cover applications. However, they have hardly been utilised in validation processes, as challenges remain in sample selection and classification for highly heterogeneous photos. This study proposed an approach to employ geo-tagged photos for land cover validation by using the deep learning technology. The approach first identified photos automatically based on the VGG-16 network. Then, samples for validation were selected and further classified by considering photos distribution and classification probabilities. The implementations were conducted for the validation of the GlobeLand30 land cover product in a heterogeneous area, western California. Experimental results represented promises in land cover validation, given that GlobeLand30 showed an overall accuracy of 83.80% with classified samples, which was close to the validation result of 80.45% based on visual interpretation. Additionally, the performances of deep learning based on ResNet-50 and AlexNet were also quantified, revealing no substantial differences in final validation results. The proposed approach ensures geo-tagged photo quality, and supports the sample classification strategy by considering photo distribution, with accuracy improvement from 72.07% to 79.33% compared with solely considering the single nearest photo. Consequently, the presented approach proves the feasibility of deep learning technology on land cover information identification of geo-tagged photos, and has a great potential to support and improve the efficiency of land cover validation.

  11. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli

    This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram), and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and, hence, allow for prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fraction, and phase stabilities,more » which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad - for more rapid discovery and development of new materials.« less

  12. Using Web-Based Questionnaires and Obstetric Records to Assess General Health Characteristics Among Pregnant Women: A Validation Study

    PubMed Central

    Schouten, Naomi PE; Merkus, Peter JFM; Verhaak, Chris M; Roeleveld, Nel; Roukema, Jolt

    2015-01-01

    Background Self-reported medical history information is included in many studies. However, data on the validity of Web-based questionnaires assessing medical history are scarce. If proven to be valid, Web-based questionnaires may provide researchers with an efficient means to collect data on this parameter in large populations. Objective The aim of this study was to assess the validity of a Web-based questionnaire on chronic medical conditions, allergies, and blood pressure readings against obstetric records and data from general practitioners. Methods Self-reported questionnaire data were compared with obstetric records for 519 pregnant women participating in the Dutch PRegnancy and Infant DEvelopment (PRIDE) Study from July 2011 through November 2012. These women completed Web-based questionnaires around their first prenatal care visit and in gestational weeks 17 and 34. We calculated kappa statistics (κ) and the observed proportions of positive and negative agreement between the baseline questionnaire and obstetric records for chronic conditions and allergies. In case of inconsistencies between these 2 data sources, medical records from the woman’s general practitioner were consulted as the reference standard. For systolic and diastolic blood pressure, intraclass correlation coefficients (ICCs) were calculated for multiple data points. Results Agreement between the baseline questionnaire and the obstetric record was substantial (κ=.61) for any chronic condition and moderate for any allergy (κ=.51). For specific conditions, we found high observed proportions of negative agreement (range 0.88-1.00) and on average moderate observed proportions of positive agreement with a wide range (range 0.19-0.90). Using the reference standard, the sensitivity of the Web-based questionnaire for chronic conditions and allergies was comparable to or even better than the sensitivity of the obstetric records, in particular for migraine (0.90 vs 0.40, P=.02), asthma (0.86 vs 0

  13. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    PubMed

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type-error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ(2) test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness) as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.

  14. Examining students' views about validity of experiments: From introductory to Ph.D. students

    NASA Astrophysics Data System (ADS)

    Hu, Dehui; Zwickl, Benjamin M.

    2018-06-01

    We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.

  15. Validity And Practicality of Experiment Integrated Guided Inquiry-Based Module on Topic of Colloidal Chemistry for Senior High School Learning

    NASA Astrophysics Data System (ADS)

    Andromeda, A.; Lufri; Festiyed; Ellizar, E.; Iryani, I.; Guspatni, G.; Fitri, L.

    2018-04-01

    This Research & Development study aims to produce a valid and practical experiment integrated guided inquiry based module on topic of colloidal chemistry. 4D instructional design model was selected in this study. Limited trial of the product was conducted at SMAN 7 Padang. Instruments used were validity and practicality questionnaires. Validity and practicality data were analyzed using Kappa moment. Analysis of the data shows that Kappa moment for validity was 0.88 indicating a very high degree of validity. Kappa moments for the practicality from students and teachers were 0.89 and 0.95 respectively indicating high degree of practicality. Analysis on the module filled in by students shows that 91.37% students could correctly answer critical thinking, exercise, prelab, postlab and worksheet questions asked in the module. These findings indicate that the integrated guided inquiry based module on topic of colloidal chemistry was valid and practical for chemistry learning in senior high school.

  16. Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Cañadas, M.; Arce, P.; Rato Mendes, P.

    2011-01-01

    Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals or the storage of single events for an off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ less than 1% for a 250-750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL-1 and the simulated peak was

  17. Validity and reliability assessment of a peer evaluation method in team-based learning classes.

    PubMed

    Yoon, Hyun Bae; Park, Wan Beom; Myung, Sun-Jung; Moon, Sang Hui; Park, Jun-Bean

    2018-03-01

    Team-based learning (TBL) is increasingly employed in medical education because of its potential to promote active group learning. In TBL, learners are usually asked to assess the contributions of peers within their group to ensure accountability. The purpose of this study is to assess the validity and reliability of a peer evaluation instrument that was used in TBL classes in a single medical school. A total of 141 students were divided into 18 groups in 11 TBL classes. The students were asked to evaluate their peers in the group based on evaluation criteria that were provided to them. We analyzed the comments that were written for the highest and lowest achievers to assess the validity of the peer evaluation instrument. The reliability of the instrument was assessed by examining the agreement among peer ratings within each group of students via intraclass correlation coefficient (ICC) analysis. Most of the students provided reasonable and understandable comments for the high and low achievers within their group, and most of those comments were compatible with the evaluation criteria. The average ICC of each group ranged from 0.390 to 0.863, and the overall average was 0.659. There was no significant difference in inter-rater reliability according to the number of members in the group or the timing of the evaluation within the course. The peer evaluation instrument that was used in the TBL classes was valid and reliable. Providing evaluation criteria and rules seemed to improve the validity and reliability of the instrument.

  18. Validation of the PROMIS Physical Function Measures in a Diverse U.S. Population-Based Cohort of Cancer Patients

    PubMed Central

    Jensen, Roxanne E.; Potosky, Arnold L.; Reeve, Bryce B.; Hahn, Elizabeth; Cella, David; Fries, James; Smith, Ashley Wilder; Keegan, Theresa H.M.; Wu, Xiao-Cheng; Paddock, Lisa; Moinpour, Carol M.

    2016-01-01

    Purpose To evaluate the validity of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function measures in a diverse, population-based cancer sample. Methods Cancer patients 6–13 months post diagnosis (n=4,840) were recruited for the Measuring Your Health (MY-Health) study. Participants were diagnosed between 2010–2013 with non-Hodgkin lymphoma or cancers of the colorectum, lung, breast, uterus, cervix, or prostate. Four PROMIS Physical Function short forms (4a, 6b, 10a, and 16) were evaluated for validity and reliability across age and race-ethnicity groups. Covariates included gender, marital status, education level, cancer site and stage, comorbidities, and functional status. Results PROMIS Physical Function short forms showed high internal consistency (Cronbach’s α =0.92 – 0.96), convergent validity (Fatigue, Pain Interference, FACT Physical Well-Being all r≥0.68) and discriminant validity (unrelated domains all r≤0.3) across survey short forms, age, and race-ethnicity. Known group differences by demographic, clinical, and functional characteristics performed as hypothesized. Ceiling effects for higher-functioning individuals were identified on most forms. Conclusions This study provides strong evidence that PROMIS Physical Function measures are valid and reliable in multiple race-ethnicity and age groups. Researchers selecting specific PROMIS short forms should consider the degree of functional disability in their patient population to ensure that length and content are tailored to limit response burden. PMID:25935353

  19. CRISPR-Cas9-based target validation for p53-reactivating model compounds

    PubMed Central

    Wanzel, Michael; Vischedyk, Jonas B; Gittler, Miriam P; Gremke, Niklas; Seiz, Julia R; Hefter, Mirjam; Noack, Magdalena; Savai, Rajkumar; Mernberger, Marco; Charles, Joël P; Schneikert, Jean; Bretz, Anne Catherine; Nist, Andrea; Stiewe, Thorsten

    2015-01-01

    Inactivation of the p53 tumor suppressor by Mdm2 is one of the most frequent events in cancer, so compounds targeting the p53-Mdm2 interaction are promising for cancer therapy. Mechanisms conferring resistance to p53-reactivating compounds are largely unknown. Here we show using CRISPR-Cas9–based target validation in lung and colorectal cancer that the activity of nutlin, which blocks the p53-binding pocket of Mdm2, strictly depends on functional p53. In contrast, sensitivity to the drug RITA, which binds the Mdm2-interacting N terminus of p53, correlates with induction of DNA damage. Cells with primary or acquired RITA resistance display cross-resistance to DNA crosslinking compounds such as cisplatin and show increased DNA cross-link repair. Inhibition of FancD2 by RNA interference or pharmacological mTOR inhibitors restores RITA sensitivity. The therapeutic response to p53-reactivating compounds is therefore limited by compound-specific resistance mechanisms that can be resolved by CRISPR-Cas9-based target validation and should be considered when allocating patients to p53-reactivating treatments. PMID:26595461

  20. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    PubMed

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only I part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  1. Biological validation of physical coastal waters classification along the NE Atlantic region based on rocky macroalgae distribution

    NASA Astrophysics Data System (ADS)

    Ramos, Elvira; Puente, Araceli; Juanes, José Antonio; Neto, João M.; Pedersen, Are; Bartsch, Inka; Scanlan, Clare; Wilkes, Robert; Van den Bergh, Erika; Ar Gall, Erwan; Melo, Ricardo

    2014-06-01

    A methodology to classify rocky shores along the North East Atlantic (NEA) region was developed. Previously, biotypes and the variability of environmental conditions within these were recognized based on abiotic data. A biological validation was required in order to support the ecological meaning of the physical typologies obtained. A database of intertidal macroalgae species occurring in the coastal area between Norway and the South Iberian Peninsula was generated. Semi-quantitative abundance data of the most representative macroalgal taxa were collected in three levels: common, rare or absent. Ordination and classification multivariate analyses revealed a clear latitudinal gradient in the distribution of macroalgae species resulting in two distinct groups: one northern and one southern group, separated at the coast of Brittany (France). In general, the results based on biological data coincided with the results based on physical characteristics. The ecological meaning of the coastal waters classification at a broad scale shown in this work demonstrates that it can be valuable as a practical tool for conservation and management purposes.

  2. A Multiphase Validation of Atlas-Based Automatic and Semiautomatic Segmentation Strategies for Prostate MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London

    2013-01-01

    Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual,more » N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per slice contouring time for the 2 physicians was 228 (75), 17 (3.5), 209 (65), and 15 seconds (3.9), respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were observed based on physician, type of contouring, and case sequence again. The average relative timesaving for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Observed time savings

  3. Validation of a Low-Cost Paper-Based Screening Test for Sickle Cell Anemia

    PubMed Central

    Piety, Nathaniel Z.; Yang, Xiaoxi; Kanter, Julie; Vignes, Seth M.; George, Alex; Shevkoplyas, Sergey S.

    2016-01-01

    Background The high childhood mortality and life-long complications associated with sickle cell anemia (SCA) in developing countries could be significantly reduced with effective prophylaxis and education if SCA is diagnosed early in life. However, conventional laboratory methods used for diagnosing SCA remain prohibitively expensive and impractical in this setting. This study describes the clinical validation of a low-cost paper-based test for SCA that can accurately identify sickle trait carriers (HbAS) and individuals with SCA (HbSS) among adults and children over 1 year of age. Methods and Findings In a population of healthy volunteers and SCA patients in the United States (n = 55) the test identified individuals whose blood contained any HbS (HbAS and HbSS) with 100% sensitivity and 100% specificity for both visual evaluation and automated analysis, and detected SCA (HbSS) with 93% sensitivity and 94% specificity for visual evaluation and 100% sensitivity and 97% specificity for automated analysis. In a population of post-partum women (with a previously unknown SCA status) at a primary obstetric hospital in Cabinda, Angola (n = 226) the test identified sickle cell trait carriers with 94% sensitivity and 97% specificity using visual evaluation (none of the women had SCA). Notably, our test permits instrument- and electricity-free visual diagnostics, requires minimal training to be performed, can be completed within 30 minutes, and costs about $0.07 in test-specific consumable materials. Conclusions Our results validate the paper-based SCA test as a useful low-cost tool for screening adults and children for sickle trait and disease and demonstrate its practicality in resource-limited clinical settings. PMID:26735691

  4. Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan.

    PubMed

    Moore, Amy Lawson; Miller, Terissa M

    2018-01-01

    The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.

  5. Developing symptom-based predictive models of endometriosis as a clinical screening tool: results from a multicenter study

    PubMed Central

    Nnoaham, Kelechi E.; Hummelshoj, Lone; Kennedy, Stephen H.; Jenkinson, Crispin; Zondervan, Krina T.

    2012-01-01

    Objective To generate and validate symptom-based models to predict endometriosis among symptomatic women prior to undergoing their first laparoscopy. Design Prospective, observational, two-phase study, in which women completed a 25-item questionnaire prior to surgery. Setting Nineteen hospitals in 13 countries. Patient(s) Symptomatic women (n = 1,396) scheduled for laparoscopy without a previous surgical diagnosis of endometriosis. Intervention(s) None. Main Outcome Measure(s) Sensitivity and specificity of endometriosis diagnosis predicted by symptoms and patient characteristics from optimal models developed using multiple logistic regression analyses in one data set (phase I), and independently validated in a second data set (phase II) by receiver operating characteristic (ROC) curve analysis. Result(s) Three hundred sixty (46.7%) women in phase I and 364 (58.2%) in phase II were diagnosed with endometriosis at laparoscopy. Menstrual dyschezia (pain on opening bowels) and a history of benign ovarian cysts most strongly predicted both any and stage III and IV endometriosis in both phases. Prediction of any-stage endometriosis, although improved by ultrasound scan evidence of cyst/nodules, was relatively poor (area under the curve [AUC] = 68.3). Stage III and IV disease was predicted with good accuracy (AUC = 84.9, sensitivity of 82.3% and specificity 75.8% at an optimal cut-off of 0.24). Conclusion(s) Our symptom-based models predict any-stage endometriosis relatively poorly and stage III and IV disease with good accuracy. Predictive tools based on such models could help to prioritize women for surgical investigation in clinical practice and thus contribute to reducing time to diagnosis. We invite other researchers to validate the key models in additional populations. PMID:22657249

  6. CFD validation experiments for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on an evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments could provide new validation data.

  7. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  8. The Validation of a Classroom Observation Instrument Based on the Construct of Teacher Adaptive Practice

    ERIC Educational Resources Information Center

    Loughland, Tony; Vlies, Penny

    2016-01-01

    Teacher adaptability is a key disposition for teachers that has been linked to outcomes of interests to schools. The aim of this study was to examine how the broader disposition of teacher adaptability might be observable as classroom-based adaptive practices using an argument-based approach to validation. The findings from the initial phase of…

  9. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  10. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  11. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software.

    PubMed

    Lopez, Natalia; Perez, Elisa; Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E

    2018-01-01

    The rehabilitation process is a fundamental stage for recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a tracking platform of the movement made by an individual's upper limb using Kinect sensor(s) to be applied for the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and report of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. The agreement of joint angles measured with the proposed software and goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning interchangeability of both techniques. Additionally, the results of Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for the clinical use. The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of the rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the software Kinect for rehabilitation and other applications, and the expert's opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established

  12. Using the web to validate document recognition results: experiments with business cards

    NASA Astrophysics Data System (ADS)

    Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea

    2004-12-01

    The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.

  13. Using the web to validate document recognition results: experiments with business cards

    NASA Astrophysics Data System (ADS)

    Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea

    2005-01-01

    The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.

  14. Development and validation of a melanoma risk score based on pooled data from 16 case-control studies

    PubMed Central

    Davies, John R; Chang, Yu-mei; Bishop, D Timothy; Armstrong, Bruce K; Bataille, Veronique; Bergman, Wilma; Berwick, Marianne; Bracci, Paige M; Elwood, J Mark; Ernstoff, Marc S; Green, Adele; Gruis, Nelleke A; Holly, Elizabeth A; Ingvar, Christian; Kanetsky, Peter A; Karagas, Margaret R; Lee, Tim K; Le Marchand, Loïc; Mackie, Rona M; Olsson, Håkan; Østerlind, Anne; Rebbeck, Timothy R; Reich, Kristian; Sasieni, Peter; Siskind, Victor; Swerdlow, Anthony J; Titus, Linda; Zens, Michael S; Ziegler, Andreas; Gallagher, Richard P.; Barrett, Jennifer H; Newton-Bishop, Julia

    2015-01-01

    Background We report the development of a cutaneous melanoma risk algorithm based upon 7 factors; hair colour, skin type, family history, freckling, nevus count, number of large nevi and history of sunburn, intended to form the basis of a self-assessment webtool for the general public. Methods Predicted odds of melanoma were estimated by analysing a pooled dataset from 16 case-control studies using logistic random coefficients models. Risk categories were defined based on the distribution of the predicted odds in the controls from these studies. Imputation was used to estimate missing data in the pooled datasets. The 30th, 60th and 90th centiles were used to distribute individuals into four risk groups for their age, sex and geographic location. Cross-validation was used to test the robustness of the thresholds for each group by leaving out each study one by one. Performance of the model was assessed in an independent UK case-control study dataset. Results Cross-validation confirmed the robustness of the threshold estimates. Cases and controls were well discriminated in the independent dataset (area under the curve 0.75, 95% CI 0.73-0.78). 29% of cases were in the highest risk group compared with 7% of controls, and 43% of controls were in the lowest risk group compared with 13% of cases. Conclusion We have identified a composite score representing an estimate of relative risk and successfully validated this score in an independent dataset. Impact This score may be a useful tool to inform members of the public about their melanoma risk. PMID:25713022

  15. Content validity of symptom-based measures for diabetic, chemotherapy, and HIV peripheral neuropathy.

    PubMed

    Gewandter, Jennifer S; Burke, Laurie; Cavaletti, Guido; Dworkin, Robert H; Gibbons, Christopher; Gover, Tony D; Herrmann, David N; Mcarthur, Justin C; McDermott, Michael P; Rappaport, Bob A; Reeve, Bryce B; Russell, James W; Smith, A Gordon; Smith, Shannon M; Turk, Dennis C; Vinik, Aaron I; Freeman, Roy

    2017-03-01

    No treatments for axonal peripheral neuropathy are approved by the United States Food and Drug Administration (FDA). Although patient- and clinician-reported outcomes are central to evaluating neuropathy symptoms, they can be difficult to assess accurately. The inability to identify efficacious treatments for peripheral neuropathies could be due to invalid or inadequate outcome measures. This systematic review examined the content validity of symptom-based measures of diabetic peripheral neuropathy, HIV neuropathy, and chemotherapy-induced peripheral neuropathy. Use of all FDA-recommended methods to establish content validity was only reported for 2 of 18 measures. Multiple sensory and motor symptoms were included in measures for all 3 conditions; these included numbness, tingling, pain, allodynia, difficulty walking, and cramping. Autonomic symptoms were less frequently included. Given significant overlap in symptoms between neuropathy etiologies, a measure with content validity for multiple neuropathies with supplemental disease-specific modules could be of great value in the development of disease-modifying treatments for peripheral neuropathies. Muscle Nerve 55: 366-372, 2017. © 2016 Wiley Periodicals, Inc.

  16. Development and validation of RAYDOSE: a Geant4-based application for molecular radiotherapy

    NASA Astrophysics Data System (ADS)

    Marcatili, S.; Pettinato, C.; Daniels, S.; Lewis, G.; Edwards, P.; Fanti, S.; Spezi, E.

    2013-04-01

    We developed and validated a Monte-Carlo-based application (RAYDOSE) to generate patient-specific 3D dose maps on the basis of pre-treatment imaging studies. A CT DICOM image is used to model patient geometry, while repeated PET scans are employed to assess radionuclide kinetics and distribution at the voxel level. In this work, we describe the structure of this application and present the tests performed to validate it against reference data and experiments. We used the spheres of a NEMA phantom to calculate S values and total doses. The comparison with reference data from OLINDA/EXM showed an agreement within 2% for a sphere size above 2.8 cm diameter. A custom heterogeneous phantom composed of several layers of Perspex and lung equivalent material was used to compare TLD measurements of gamma radiation from 131I to Monte Carlo simulations. An agreement within 5% was found. RAYDOSE has been validated against reference data and experimental measurements and can be a useful multi-modality platform for treatment planning and research in MRT.

  17. On Federated and Proof Of Validation Based Consensus Algorithms In Blockchain

    NASA Astrophysics Data System (ADS)

    Ambili, K. N.; Sindhu, M.; Sethumadhavan, M.

    2017-08-01

    Almost all real world activities have been digitized and there are various client server architecture based systems in place to handle them. These are all based on trust on third parties. There is an active attempt to successfully implement blockchain based systems which ensures that the IT systems are immutable, double spending is avoided and cryptographic strength is provided to them. A successful implementation of blockchain as backbone of existing information technology systems is bound to eliminate various types of fraud and ensure quicker delivery of the item on trade. To adapt IT systems to blockchain architecture, an efficient consensus algorithm need to be designed. Blockchain based on proof of work first came up as the backbone of cryptocurrency. After this, several other methods with variety of interesting features have come up. In this paper, we conduct a survey on existing attempts to achieve consensus in block chain. A federated consensus method and a proof of validation method are being compared.

  18. Validation of an algorithm-based definition of treatment resistance in patients with schizophrenia.

    PubMed

    Ajnakina, Olesya; Horsdal, Henriette Thisted; Lally, John; MacCabe, James H; Murray, Robin M; Gasse, Christiane; Wimberley, Theresa

    2018-02-19

    Large-scale pharmacoepidemiological research on treatment resistance relies on accurate identification of people with treatment-resistant schizophrenia (TRS) based on data that are retrievable from administrative registers. This is usually approached by operationalising clinical treatment guidelines by using prescription and hospital admission information. We examined the accuracy of an algorithm-based definition of TRS based on clozapine prescription and/or meeting algorithm-based eligibility criteria for clozapine against a gold standard definition using case notes. We additionally validated a definition entirely based on clozapine prescription. 139 schizophrenia patients aged 18-65years were followed for a mean of 5years after first presentation to psychiatric services in South-London, UK. The diagnostic accuracy of the algorithm-based measure against the gold standard was measured with sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). A total of 45 (32.4%) schizophrenia patients met the criteria for the gold standard definition of TRS; applying the algorithm-based definition to the same cohort led to 44 (31.7%) patients fulfilling criteria for TRS with sensitivity, specificity, PPV and NPV of 62.2%, 83.0%, 63.6% and 82.1%, respectively. The definition based on lifetime clozapine prescription had sensitivity, specificity, PPV and NPV of 40.0%, 94.7%, 78.3% and 76.7%, respectively. Although a perfect definition of TRS cannot be derived from available prescription and hospital registers, these results indicate that researchers can confidently use registries to identify individuals with TRS for research and clinical practices. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Validation and detection of vessel landmarks by using anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Bernhardt, Dominik; Biermann, Christina; Dillmann, Rüdiger

    2010-03-01

    The detection of anatomical landmarks is an important prerequisite to analyze medical images fully automatically. Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level anatomical knowledge to improve these classification results. We propose a new approach to validate candidates for vessel bifurcation landmarks which is also applied to systematically search missed and to validate ambiguous landmarks. A knowledge base is trained providing human-readable geometric information of the vascular system, mainly vessel lengths, radii and curvature information, for validation of landmarks and to guide the search process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach is proposed which is based on Fast Marching and incorporates anatomical information from the knowledge base. Using the proposed algorithms, an anatomical knowledge base has been generated based on 90 manually annotated CT images containing different parts of the body. To evaluate the landmark validation a set of 50 carotid datasets has been tested in combination with a state of the art landmark detector with excellent results. Beside the carotid bifurcation the algorithm is designed to handle a wide range of vascular landmarks, e.g. celiac, superior mesenteric, renal, aortic, iliac and femoral bifurcation.

  20. Internal validation of two new retrotransposons-based kits (InnoQuant® HY and InnoTyper® 21) at a forensic lab.

    PubMed

    Martins, Cátia; Ferreira, Paulo Miguel; Carvalho, Raquel; Costa, Sandra Cristina; Farinha, Carlos; Azevedo, Luísa; Amorim, António; Oliveira, Manuela

    2018-02-01

    Obtaining a genetic profile from pieces of evidence collected at a crime scene is the primary objective of forensic laboratories. New procedures, methods, kits, software or equipment must be carefully evaluated and validated before its implementation. The constant development of new methodologies for DNA testing leads to a steady process of validation, which consists of demonstrating that the technology is robust, reproducible, and reliable throughout a defined range of conditions. The present work aims to internally validate two new retrotransposon-based kits (InnoQuant ® HY and InnoTyper ® 21), under the working conditions of the Laboratório de Polícia Científica da Polícia Judiciária (LPC-PJ). For the internal validation of InnoQuant ® HY and InnoTyper ® 21 sensitivity, repeatability, reproducibility, and mixture tests and a concordance study between these new kits and those currently in use at LPC-PJ (Quantifiler ® Duo and GlobalFiler™) were performed. The results obtained for sensitivity, repeatability, and reproducibility tests demonstrated that both InnoQuant ® HY and InnoTyper ® 21 are robust, reproducible, and reliable. The results of the concordance studies demonstrate that InnoQuant ® HY produced quantification results in nearly 29% more than Quantifiler ® Duo (indicating that this new kit is more effective in challenging samples), while the differences observed between InnoTyper ® 21 and GlobalFiler™ are not significant. Therefore, the utility of InnoTyper ® 21 has been proven, especially by the successful amplification of a greater number of complete genetic profiles (27 vs. 21). The results herein presented allowed the internal validation of both InnoQuant ® HY and InnoTyper ® 21, and their implementation in the LPC-PJ laboratory routine for the treatment of challenging samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. The development of a valid discovery-based learning module to improve students' mathematical connection

    NASA Astrophysics Data System (ADS)

    Kuneni, Erna; Mardiyana, Pramudya, Ikrar

    2017-08-01

    Geometry is the most important branch in mathematics. The purpose of teaching this material is to develop students' level of thinking for a better understanding. Otherwise, geometry in particular, has contributed students' failure in mathematics examinations. This problem occurs due to special feature in geometry which has complexity of correlation among its concept. This relates to mathematical connection. It is still difficult for students to improve this ability. This is because teachers' lack in facilitating students towards it. Eventhough, facilitating students can be in the form of teaching material. A learning module can be a solution because it consists of series activities that should be taken by students to achieve a certain goal. A series activities in this case is adopted by the phases of discovery-based learning model. Through this module, students are facilitated to discover concept by deep instruction and guidance. It can build the mathematical habits of mind and also strengthen the mathematical connection. Method used in this research was ten stages of research and development proposed by Bord and Gall. The research purpose is to create a valid learning module to improve students' mathematical connection in teaching quadrilateral. The retrieved valid module based on media expert judgment is 2,43 for eligibility chart aspect, 2,60 for eligibility presentation aspect, and 3,00 for eligibility contents aspect. Then the retrieved valid module based on material expert judgment is 3,10 for eligibility content aspect, 2,87 for eligibility presentation aspect, and 2,80 for eligibility language and legibility aspect.

  2. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    PubMed

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure a long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on a HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results of the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that comparison of the precision (SDdiff) in marker-based RSA analysis was more precise than model-based RSA analysis in TT (p CAD < 0.001; p RE = 0.04) and TR (p CAD = 0.01; p RE < 0.001). The clinical precision (double examination in 8 patients) comparing the precision SDdiff was better evaluating the TT using the marker-based RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analysis regarding the TR (p = 0.91). Comparing the mean signed values regarding the TT and the TR at the 5-year follow-up in 13 patients, the TT was lower (p = 0.03) and the TR higher (p = 0.04) in the marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than model-based RSA. However, problems with occluded markers lead to exclusion of many patients which was not a problem with model-based RSA. HRA were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.

  3. Validation of a school-based amblyopia screening protocol in a kindergarten population.

    PubMed

    Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L

    2016-08-04

    To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, which consisted of visual acuity measurement using Lea charts, ocular alignment test, ocular motility assessment, and stereoacuity with TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol. The sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children were subjected to the amblyopia screening program during the 2010-2011 school year. A population of 100 children were selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 was obtained for the amblyopia screening validation model. The amblyopia screening protocol model tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.

  4. Beware of external validation! - A Comparative Study of Several Validation Techniques used in QSAR Modelling.

    PubMed

    Majumdar, Subhabrata; Basak, Subhash C

    2018-04-26

    Proper validation is an important aspect of QSAR modelling. External validation is one of the widely used validation methods in QSAR where the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has high value of p but small n (i.e. n < p). Motivated by the evidence of inadequacies of external validation in estimating the true predictive capability of a statistical model in recent literature, this paper performs an extensive and comparative study of this method with several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external and multi-split validation, using statistical models built using the LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics have high variation among different random splits of the data, hence are not recommended for predictive QSAR models. LOO has the overall best performance among all validation methods applied in our scenario. Results from external validation are too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  5. Towards a model-based patient selection strategy for proton therapy: External validation of photon-derived Normal Tissue Complication Probability models in a head and neck proton therapy cohort

    PubMed Central

    Blanchard, P; Wong, AJ; Gunn, GB; Garden, AS; Mohamed, ASR; Rosenthal, DI; Crutison, J; Wu, R; Zhang, X; Zhu, XR; Mohan, R; Amin, MV; Fuller, CD; Frank, SJ

    2017-01-01

    Objective To externally validate head and neck cancer (HNC) photon-derived normal tissue complication probability (NTCP) models in patients treated with proton beam therapy (PBT). Methods This prospective cohort consisted of HNC patients treated with PBT at a single institution. NTCP models were selected based on the availability of data for validation and evaluated using the leave-one-out cross-validated area under the curve (AUC) for the receiver operating characteristics curve. Results 192 patients were included. The most prevalent tumor site was oropharynx (n=86, 45%), followed by sinonasal (n=28), nasopharyngeal (n=27) or parotid (n=27) tumors. Apart from the prediction of acute mucositis (reduction of AUC of 0.17), the models overall performed well. The validation (PBT) AUC and the published AUC were respectively 0.90 versus 0.88 for feeding tube 6 months post-PBT; 0.70 versus 0.80 for physician rated dysphagia 6 months post-PBT; 0.70 versus 0.80 for dry mouth 6 months post-PBT; and 0.73 versus 0.85 for hypothyroidism 12 months post-PBT. Conclusion While the drop in NTCP model performance was expected in PBT patients, the models showed robustness and remained valid. Further work is warranted, but these results support the validity of the model-based approach for treatment selection for HNC patients. PMID:27641784

  6. Assessing mental health clinicians' intentions to adopt evidence-based treatments: reliability and validity testing of the evidence-based treatment intentions scale.

    PubMed

    Williams, Nathaniel J

    2016-05-05

    Intentions play a central role in numerous empirically supported theories of behavior and behavior change and have been identified as a potentially important antecedent to successful evidence-based treatment (EBT) implementation. Despite this, few measures of mental health clinicians' EBT intentions exist and available measures have not been subject to thorough psychometric evaluation or testing. This paper evaluates the psychometric properties of the evidence-based treatment intentions (EBTI) scale, a new measure of mental health clinicians' intentions to adopt EBTs. The study evaluates the reliability and validity of inferences made with the EBTI using multi-method, multi-informant criterion variables collected over 12 months from a sample of 197 mental health clinicians delivering services in 13 mental health agencies. Structural, predictive, and discriminant validity evidence is assessed. Findings support the EBTI's factor structure (χ (2) = 3.96, df = 5, p = .556) and internal consistency reliability (α = .80). Predictive validity evidence was provided by robust and significant associations between EBTI scores and clinicians' observer-reported attendance at a voluntary EBT workshop at a 1-month follow-up (OR = 1.92, p < .05), self-reported EBT adoption at a 12-month follow-up (R (2) = .17, p < .001), and self-reported use of EBTs with clients at a 12-month follow-up (R (2) = .25, p < .001). Discriminant validity evidence was provided by small associations with clinicians' concurrently measured psychological work climate perceptions of functionality (R (2) = .06, p < .05), engagement (R (2) = .06, p < .05), and stress (R (2) = .00, ns). The EBTI is a practical and theoretically grounded measure of mental health clinicians' EBT intentions. Scores on the EBTI provide a basis for valid inferences regarding mental health clinicians' intentions to adopt EBTs. Discussion focuses on research and practice applications.

  7. Cross-cultural validity of the ABILOCO questionnaire for individuals with stroke, based on Rasch analysis.

    PubMed

    Avelino, Patrick Roberto; Magalhães, Lívia Castro; Faria-Fortini, Iza; Basílio, Marluce Lopes; Menezes, Kênia Kiefer Parreiras; Teixeira-Salmela, Luci Fuscaldi

    2018-06-01

    The purpose of this study was to evaluate the cross-cultural validity of the Brazilian version of the ABILOCO questionnaire for stroke subjects. Cross-cultural adaptation of the original English version of the ABILOCO to the Brazilian-Portuguese language followed standardized procedures. The adapted version was administered to 136 stroke subjects and its measurement properties were assessed using Rash analysis. Cross-cultural validity was based on cultural invariance analyses. Goodness-of-fit analysis revealed one misfitting item. The principal component analysis of the residuals showed that the first dimension explained 45% of the variance in locomotion ability; however, the eigenvalue was 1.92. The ABILOCO-Brazil divided the sample into two levels of ability and the items into about seven levels of difficulty. The item-person map showed some ceiling effect. Cultural invariance analyses revealed that although there were differences in the item calibrations between the ABILOCO-original and ABILOCO-Brazil, they did not impact the measures of locomotion ability. The ABILOCO-Brazil demonstrated satisfactory measurement properties to be used within both clinical and research contexts in Brazil, as well cross-cultural validity to be used in international/multicentric studies. However, the presence of ceiling effect suggests that it may not be appropriate for the assessment of individuals with high levels of locomotion ability. Implications for rehabilitation Self-report measures of locomotion ability are clinically important, since they describe the abilities of the individuals within real life contexts. The ABILOCO questionnaire, specific for stroke survivors, demonstrated satisfactory measurement properties, but may not be most appropriate to assess individuals with high levels of locomotion ability The results of the cross-cultural validity showed that the ABILOCO-Original and the ABILOCO-Brazil calibrations may be used interchangeable.

  8. Assessing Computer Literacy: A Validated Instrument and Empirical Results.

    ERIC Educational Resources Information Center

    Gabriel, Roy M.

    1985-01-01

    Describes development of a comprehensive computer literacy assessment battery for K-12 curriculum based on objectives of a curriculum implemented in the Worldwide Department of Defense Dependents Schools system. Test development and field test data are discussed and a correlational analysis which assists in interpretation of test results is…

  9. Development and Validation of a Theory Based Screening Process for Suicide Risk

    DTIC Science & Technology

    2015-09-01

    not be delayed until all data have been collected. This is with particular respect to our data that confirms soldiers under report suicide ideation ...and that while they say that they would inform loved ones about suicidal thoughts, over 50% of soldiers who endorse ideation have not told anyone...AD_________________ Award Number: W81XWH-11-1-0588 TITLE: Development and Validation of a Theory Based Screening Process for Suicide Risk

  10. Is Teacher Assessment Reliable or Valid for High School Students under a Web-Based Portfolio Environment?

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Wu, Bing-Hong

    2012-01-01

    This study explored the reliability and validity of teacher assessment under a Web-based portfolio assessment environment (or Web-based teacher portfolio assessment). Participants were 72 eleventh graders taking the "Computer Application" course. The students perform portfolio creation, inspection, self- and peer-assessment using the Web-based…

  11. Diagnostic value of blood-derived microRNAs for schizophrenia: results of a meta-analysis and validation.

    PubMed

    Liu, Sha; Zhang, Fuquan; Wang, Xijin; Shugart, Yin Yao; Zhao, Yingying; Li, Xinrong; Liu, Zhifen; Sun, Ning; Yang, Chunxia; Zhang, Kerang; Yue, Weihua; Yu, Xin; Xu, Yong

    2017-11-10

    There is an increasing interest in searching biomarkers for schizophrenia (SZ) diagnosis, which overcomes the drawbacks inherent with the subjective diagnostic methods. MicroRNA (miRNA) fingerprints have been explored for disease diagnosis. We performed a meta-analysis to examine miRNA diagnostic value for SZ and further validated the meta-analysis results. Using following terms: schizophrenia/SZ, microRNA/miRNA, diagnosis, sensitivity and specificity, we searched databases restricted to English language and reviewed all articles published from January 1990 to October 2016. All extracted data were statistically analyzed and the results were further validated with peripheral blood mononuclear cells (PBMNCs) isolated from patients and healthy controls using RT-qPCR and receiver operating characteristic (ROC) analysis. A total of 6 studies involving 330 patients and 202 healthy controls were included for meta-analysis. The pooled sensitivity, specificity and diagnostic odds ratio were 0.81 (95% CI: 0.75-0.86), 0.81 (95% CI: 0.72-0.88) and 18 (95% CI: 9-34), respectively; the positive and negative likelihood ratio was 4.3 and 0.24 respectively; the area under the curve in summary ROC was 0.87 (95% CI: 0.84-0.90). Validation revealed that miR-181b-5p, miR-21-5p, miR-195-5p, miR-137, miR-346 and miR-34a-5p in PBMNCs had high diagnostic sensitivity and specificity in the context of schizophrenia. In conclusion, blood-derived miRNAs might be promising biomarkers for SZ diagnosis.

  12. Preliminary results of the Geoid Slope Validation Survey 2014 in Iowa

    NASA Astrophysics Data System (ADS)

    Wang, Y. M.; Becker, C.; Breidenbach, S.; Geoghegan, C.; Martin, D.; Winester, D.; Hanson, T.; Mader, G. L.; Eckl, M. C.

    2014-12-01

    The National Geodetic Survey conducted a second Geoid Slope Validation Survey in the summer of 2014 (GSVS14). The survey took place in Iowa along U.S Route 30. The survey line is approximately 200 miles long (325 km), extending from Denison, IA to Cedar Rapids, IA. There are over 200 official survey bench marks. A leveling survey was performed, conforming to 1st order, class II specifications. A GPS survey was performed using 24 to 48 hour occupations. Absolute gravity, relative gravity, and gravity gradient measurements were also collected during the survey. In addition, deflections of the vertical were acquired at 200 eccentric survey benchmarks using the Compact Digital Astrometric Camera (CODIAC) camera. This paper presents the preliminary results of the survey, including the accuracy analysis of the leveling data, GPS ellipsoidal heights, and the deflections of the vertical which serves as an independent data set in addition to the GPS/leveling implied geoid heights.

  13. The earth radiation budget experiment: Early validation results

    NASA Astrophysics Data System (ADS)

    Smith, G. Louis; Barkstrom, Bruce R.; Harrison, Edwin F.

    The Earth Radiation Budget Experiment (ERBE) consists of radiometers on a dedicated spacecraft in a 57° inclination orbit, which has a precessional period of 2 months, and on two NOAA operational meteorological spacecraft in near polar orbits. The radiometers include scanning narrow field-of-view (FOV) and nadir-looking wide and medium FOV radiometers covering the ranges 0.2 to 5 μm and 5 to 50 μm and a solar monitoring channel. This paper describes the validation procedures and preliminary results. Each of the radiometer channels underwent extensive ground calibration, and the instrument packages include in-flight calibration facilities which, to date, show negligible changes of the instruments in orbit, except for gradual degradation of the suprasil dome of the shortwave wide FOV (about 4% per year). Measurements of the solar constant by the solar monitors, wide FOV, and medium FOV radiometers of two spacecraft agree to a fraction of a percent. Intercomparisons of the wide and medium FOV radiometers with the scanning radiometers show agreement of 1 to 4%. The multiple ERBE satellites are acquiring the first global measurements of regional scale diurnal variations in the Earth's radiation budget. These diurnal variations are verified by comparison with high temporal resolution geostationary satellite data. Other principal investigators of the ERBE Science Team are: R. Cess, SUNY, Stoneybrook; J. Coakley, NCAR; C. Duncan, M. King and A Mecherikunnel, Goddard Space Flight Center, NASA; A. Gruber and A.J. Miller, NOAA; D. Hartmann, U. Washington; F.B. House, Drexel U.; F.O. Huck, Langley Research Center, NASA; G. Hunt, Imperial College, London U.; R. Kandel and A. Berroir, Laboratory of Dynamic Meteorology, Ecole Polytechique; V. Ramanathan, U. Chicago; E. Raschke, U. of Cologne; W.L. Smith, U. of Wisconsin and T.H. Vonder Haar, Colorado State U.

  14. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regards to DNA yields for the tested robotic workstations proved to be excellent and not significantly different than that offered by the manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent or greater to those observed with phenol/chloroform extraction as well as our original validated automated magnetic bead percolation-based extraction process. Our data further supports the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Validity of Peer Evaluation for Team-Based Learning in a Dental School in Japan.

    PubMed

    Nishigawa, Keisuke; Hayama, Rika; Omoto, Katsuhiro; Okura, Kazuo; Tajima, Toyoko; Suzuki, Yoshitaka; Hosoki, Maki; Ueda, Mayu; Inoue, Miho; Rodis, Omar Marianito Maningo; Matsuka, Yoshizo

    2017-12-01

    The aim of this study was to determine the validity of peer evaluation for team-based learning (TBL) classes in dental education in comparison with the term-end examination records and TBL class scores. Examination and TBL class records of 256 third- and fourth-year dental students in six fixed prosthodontics courses from 2013 to 2015 in one dental school in Japan were investigated. Results of the term-end examination during those courses, individual readiness assurance test (IRAT), group readiness assurance test (GRAT), group assignment projects (GAP), and peer evaluation of group members in TBL classes were collected. Significant positive correlations were found between all combinations of peer evaluation, IRAT, and term-end examination. Individual scores also showed a positive correlation with group score (total of GRAT and GAP). From the investigation of the correlations in the six courses, significant positive correlations between peer evaluation and individual score were found in four of the six courses. In this study, peer evaluation seemed to be a valid index for learning performance in TBL classes. To verify the effectiveness of peer evaluation, all students have to realize the significance of scoring the team member's performance. Clear criteria and detailed instruction for appropriate evaluation are also required.

  16. Validation of OMI erythemal doses with multi-sensor ground-based measurements in Thessaloniki, Greece

    NASA Astrophysics Data System (ADS)

    Zempila, Melina Maria; Fountoulakis, Ilias; Taylor, Michael; Kazadzis, Stelios; Arola, Antti; Koukouli, Maria Elissavet; Bais, Alkiviadis; Meleti, Chariklia; Balis, Dimitrios

    2018-06-01

    The aim of this study is to validate the Ozone Monitoring Instrument (OMI) erythemal dose rates using ground-based measurements in Thessaloniki, Greece. In the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki, a Yankee Environmental System UVB-1 radiometer measures the erythemal dose rates every minute, and a Norsk Institutt for Luftforskning (NILU) multi-filter radiometer provides multi-filter based irradiances that were used to derive erythemal dose rates for the period 2005-2014. Both these datasets were independently validated against collocated UV irradiance spectra from a Brewer MkIII spectrophotometer. Cloud detection was performed based on measurements of the global horizontal radiation from a Kipp & Zonen pyranometer and from NILU measurements in the visible range. The satellite versus ground observation validation was performed taking into account the effect of temporal averaging, limitations related to OMI quality control criteria, cloud conditions, the solar zenith angle and atmospheric aerosol loading. Aerosol optical depth was also retrieved using a collocated CIMEL sunphotometer in order to assess its impact on the comparisons. The effect of total ozone columns satellite versus ground-based differences on the erythemal dose comparisons was also investigated. Since most of the public awareness alerts are based on UV Index (UVI) classifications, an analysis and assessment of OMI capability for retrieving UVIs was also performed. An overestimation of the OMI erythemal product by 3-6% and 4-8% with respect to ground measurements is observed when examining overpass and noontime estimates respectively. The comparisons revealed a relatively small solar zenith angle dependence, with the OMI data showing a slight dependence on aerosol load, especially at high aerosol optical depth values. A mean underestimation of 2% in OMI total ozone columns under cloud-free conditions was found to lead to an overestimation in OMI erythemal

  17. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team are able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October, 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with investigator's model, seven were confirmed and five were rejected or compromised.

  18. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2

    NASA Technical Reports Server (NTRS)

    Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)

    1998-01-01

    The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.

  19. Hydrological Validation of The Lpj Dynamic Global Vegetation Model - First Results and Required Actions

    NASA Astrophysics Data System (ADS)

    Haberlandt, U.; Gerten, D.; Schaphoff, S.; Lucht, W.

    Dynamic global vegetation models are developed with the main purpose to describe the spatio-temporal dynamics of vegetation at the global scale. Increasing concern about climate change impacts has put the focus of recent applications on the sim- ulation of the global carbon cycle. Water is a prime driver of biogeochemical and biophysical processes, thus an appropriate representation of the water cycle is crucial for their proper simulation. However, these models usually lack thorough validation of the water balance they produce. Here we present a hydrological validation of the current version of the LPJ (Lund- Potsdam-Jena) model, a dynamic global vegetation model operating at daily time steps. Long-term simulated runoff and evapotranspiration are compared to literature values, results from three global hydrological models, and discharge observations from various macroscale river basins. It was found that the seasonal and spatial patterns of the LPJ-simulated average values correspond well both with the measurements and the results from the stand-alone hy- drological models. However, a general underestimation of runoff occurs, which may be attributable to the low input dynamics of precipitation (equal distribution within a month), to the simulated vegetation pattern (potential vegetation without anthro- pogenic influence), and to some generalizations of the hydrological components in LPJ. Future research will focus on a better representation of the temporal variability of climate forcing, improved description of hydrological processes, and on the consider- ation of anthropogenic land use.

  20. Spatiotemporal dynamics of grassland aboveground biomass on the Qinghai-Tibet Plateau based on validated MODIS NDVI.

    PubMed

    Liu, Shiliang; Cheng, Fangyan; Dong, Shikui; Zhao, Haidi; Hou, Xiaoyun; Wu, Xue

    2017-06-23

    Spatiotemporal dynamics of aboveground biomass (AGB) is a fundamental problem for grassland environmental management on the Qinghai-Tibet Plateau (QTP). Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) data can feasibly be used to estimate AGB at large scales, and their precise validation is necessary to utilize them effectively. In our study, the clip-harvest method was used at 64 plots in QTP grasslands to obtain actual AGB values, and a handheld hyperspectral spectrometer was used to calculate field-measured NDVI to validate MODIS NDVI. Based on the models between NDVI and AGB, AGB dynamics trends during 2000-2012 were analyzed. The results showed that the AGB in QTP grasslands increased during the study period, with 70% of the grasslands undergoing increases mainly in the Qinghai Province. Also, the meadow showed a larger increasing trend than steppe. Future AGB dynamic trends were also investigated using a combined analysis of the slope values and the Hurst exponent. The results showed high sustainability of AGB dynamics trends after the study period. Predictions indicate 60% of the steppe and meadow grasslands would continue to increase in AGB, while 25% of the grasslands would remain in degradation, with most of them distributing in Tibet.