Sample records for validation experiments performed

  1. Electrolysis Performance Improvement and Validation Experiment

    NASA Technical Reports Server (NTRS)

    Schubert, Franz H.

    1992-01-01

    Viewgraphs on electrolysis performance improvement and validation experiment are presented. Topics covered include: water electrolysis: an ever increasing need/role for space missions; static feed electrolysis (SFE) technology: a concept developed for space applications; experiment objectives: why test in microgravity environment; and experiment description: approach, hardware description, test sequence and schedule.

  2. Validating the BISON fuel performance code to integral LWR experiments

    DOE PAGES

    Williamson, R. L.; Gamble, K. A.; Perez, D. M.; ...

    2016-03-24

    BISON is a modern finite element-based nuclear fuel performance code that has been under development at the Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Our results demonstrate that 1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable, with deviations between predictions and experimental data within ±10% for early life through high-burnup fuel and only slightly out of these bounds for power ramp experiments, 2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties, and 3) comparison of rod diameter results indicates a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and to more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. The initial rod diameter comparisons were unsatisfactory and have led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to

  3. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no
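
    The inverse step mentioned above (estimating soil thermal parameters from measured temperatures) can be illustrated with a small fit. The sketch below is illustrative only: it assumes a purely conductive, semi-infinite soil column forced by a sinusoidal surface temperature and fits the diffusivity to synthetic sensor data; the model form, depths, and values are hypothetical and not those of the field experiment.

    ```python
    """Illustrative inverse fit (hypothetical data): recover soil thermal diffusivity
    by matching a 1D periodic heat-conduction solution to temperature observations
    at several depths, in the spirit of the inverse method mentioned above."""
    import numpy as np
    from scipy.optimize import least_squares

    OMEGA = 2.0 * np.pi / 86400.0          # diurnal forcing frequency [rad/s]
    T_MEAN, AMPLITUDE = 290.0, 8.0         # mean surface temperature and amplitude [K]

    def soil_temperature(z, t, alpha):
        """Semi-infinite soil column with sinusoidal surface temperature: a damped,
        phase-lagged wave with damping depth d = sqrt(2*alpha/omega)."""
        d = np.sqrt(2.0 * alpha / OMEGA)
        return T_MEAN + AMPLITUDE * np.exp(-z / d) * np.sin(OMEGA * t - z / d)

    # Synthetic "sensor data" at three depths, generated with a known diffusivity.
    rng = np.random.default_rng(0)
    alpha_true = 5.0e-7                    # m^2/s
    depths = np.array([0.05, 0.20, 0.50])  # m
    times = np.linspace(0.0, 2 * 86400.0, 97)
    Z, T = np.meshgrid(depths, times)
    observed = soil_temperature(Z, T, alpha_true) + rng.normal(0.0, 0.1, Z.shape)

    def residuals(log_alpha):
        return (soil_temperature(Z, T, np.exp(log_alpha[0])) - observed).ravel()

    fit = least_squares(residuals, x0=[np.log(1.0e-6)])
    print(f"recovered diffusivity: {np.exp(fit.x[0]):.2e} m^2/s (true {alpha_true:.2e})")
    ```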

  4. CFD validation experiments for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given and gaps are identified where future experiments could provide new validation data.

  5. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions in which a net was launched to capture and wrap a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to accurately determine the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly
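
    As a rough illustration of the iterative closest point step used for knot tracking, the sketch below registers two synthetic 3D point sets with a plain nearest-neighbour ICP and a Kabsch rotation fit. The point clouds, rotation, and iteration count are made up; the real pipeline works on triangulated knot positions from the stereo cameras.

    ```python
    """Illustrative ICP registration of two synthetic "knot" point clouds."""
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def icp(source, target, iterations=20):
        current = source.copy()
        for _ in range(iterations):
            # pair each source point with its nearest neighbour in the target cloud
            _, idx = cKDTree(target).query(current)
            R, t = best_rigid_transform(current, target[idx])
            current = current @ R.T + t
        return current

    # Synthetic example: "knots" in frame k+1 are a rotated/translated copy of frame k.
    rng = np.random.default_rng(1)
    knots_k = rng.uniform(-1.0, 1.0, size=(60, 3))
    angle = np.deg2rad(10.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    knots_k1 = knots_k @ R_true.T + np.array([0.05, -0.02, 0.10])

    aligned = icp(knots_k, knots_k1)
    print("mean residual after ICP:", np.linalg.norm(aligned - knots_k1, axis=1).mean())
    ```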

  6. Validation Experiments for Spent-Fuel Dry-Cask In-Basket Convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Barton L.

    2016-08-16

    This work consisted of the following major efforts: 1. Literature survey on validation of external natural convection; 2. Design the experiment; 3. Build the experiment; 4. Run the experiment; 5. Collect results; 6. Disseminate results; and 7. Perform a CFD validation study using the results. We note that while all tasks are complete, some deviations from the original plan were made. Specifically, geometrical changes in the parameter space were skipped in favor of flow condition changes, which were found to be much more practical to implement. Changing the geometry required new as-built measurements, which proved extremely costly and impractical given the time and funds available.

  7. SeaSat-A Satellite Scatterometer (SASS) Validation and Experiment Plan

    NASA Technical Reports Server (NTRS)

    Schroeder, L. C. (Editor)

    1978-01-01

    This plan was generated by the SeaSat-A satellite scatterometer experiment team to define the pre- and post-launch activities necessary to conduct sensor validation and geophysical evaluation. Details included are an instrument and experiment description, performance requirements, success criteria, constraints, mission requirements, data processing requirements, and data analysis responsibilities.

  8. SATS HVO Concept Validation Experiment

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria; Williams, Daniel; Murdoch, Jennifer; Adams, Catherine

    2005-01-01

    A human-in-the-loop simulation experiment was conducted at the NASA Langley Research Center's (LaRC) Air Traffic Operations Lab (ATOL) in an effort to comprehensively validate tools and procedures intended to enable the Small Aircraft Transportation System, Higher Volume Operations (SATS HVO) concept of operations. The SATS HVO procedures were developed to increase the rate of operations at non-towered, non-radar airports in near all-weather conditions. A key element of the design is the establishment of a volume of airspace around designated airports where pilots accept responsibility for self-separation. Flights operating at these airports are given approach sequencing information computed by a ground-based automated system. The SATS HVO validation experiment was conducted in the ATOL during the spring of 2004 in order to determine if a pilot can safely and proficiently fly an airplane while performing SATS HVO procedures. Comparative measures of flight path error, perceived workload and situation awareness were obtained for two types of scenarios. Baseline scenarios were representative of today's system utilizing procedural separation, where air traffic control grants one approach or departure clearance at a time. SATS HVO scenarios represented approach and departure procedures as described in the SATS HVO concept of operations. Results from the experiment indicate that low-time pilots were able to fly SATS HVO procedures and maintain self-separation as safely and proficiently as flying today's procedures.

  9. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
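
    A toy version of the design loop described above is sketched here: a cross-entropy utility between a Gaussian model-prediction distribution and a Gaussian expected-observation distribution is optimised over the test input by simulated annealing. The two response models, the one-dimensional design variable, and the choice to maximise the utility are illustrative assumptions, not the paper's formulation.

    ```python
    """Toy optimal-design step: pick the test input that maximises a cross-entropy
    utility between model prediction and expected observation, via simulated annealing."""
    import numpy as np
    from scipy.optimize import dual_annealing

    def model_prediction(x):
        """Toy computational model: mean and std of the predicted response at input x."""
        return np.sin(x) + 0.1 * x, 0.05

    def expected_observation(x):
        """Toy prior for what the experiment would measure at input x."""
        return np.sin(x) + 0.05 * x**2, 0.10

    def cross_entropy_gaussian(mu_p, sig_p, mu_q, sig_q):
        """Closed-form cross entropy H(p, q) for two 1D Gaussians."""
        return 0.5 * np.log(2 * np.pi * sig_q**2) + (sig_p**2 + (mu_p - mu_q)**2) / (2 * sig_q**2)

    def negative_utility(x):
        mu_p, sig_p = model_prediction(x[0])
        mu_q, sig_q = expected_observation(x[0])
        return -cross_entropy_gaussian(mu_p, sig_p, mu_q, sig_q)  # maximise the utility

    result = dual_annealing(negative_utility, bounds=[(0.0, 3.0)])
    print(f"suggested test input: x = {result.x[0]:.3f}")
    ```

    In the full methodology the measured data at the chosen input would then update the observation distribution via Bayes' theorem before the next design iteration.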

  10. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  11. Reconceptualising the external validity of discrete choice experiments.

    PubMed

    Lancsar, Emily; Swait, Joffre

    2014-10-01

    External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.

  12. Virtual experiments, physical validation: dental morphology at the intersection of experiment and theory

    PubMed Central

    Anderson, P. S. L.; Rayfield, E. J.

    2012-01-01

    Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789

  13. Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation

    NASA Technical Reports Server (NTRS)

    Ross, James C.

    2016-01-01

    Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation and, as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper will present various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven to be difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.

  14. Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)

    2017-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.

  15. Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment

    NASA Technical Reports Server (NTRS)

    Storey, Jed; Kirk, Daniel (Editor); Marsell, Brandon (Editor); Schallhorn, Paul (Editor)

    2017-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.

  16. Digital Fly-By-Wire Flight Control Validation Experience

    NASA Technical Reports Server (NTRS)

    Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.

    1978-01-01

    The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing, in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.

  17. Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission

    NASA Technical Reports Server (NTRS)

    Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.

  18. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  19. Results from SMAP Validation Experiments 2015 and 2016

    NASA Astrophysics Data System (ADS)

    Colliander, A.; Jackson, T. J.; Cosh, M. H.; Misra, S.; Crow, W.; Powers, J.; Wood, E. F.; Mohanty, B.; Judge, J.; Drewry, D.; McNairn, H.; Bullock, P.; Berg, A. A.; Magagi, R.; O'Neill, P. E.; Yueh, S. H.

    2017-12-01

    NASA's Soil Moisture Active Passive (SMAP) mission was launched in January 2015. The objective of the mission is global mapping of soil moisture and freeze/thaw state. Well-characterized sites with calibrated in situ soil moisture measurements are used to determine the quality of the soil moisture data products; these sites are designated as core validation sites (CVS). To support the CVS-based validation, airborne field experiments are used to provide high-fidelity validation data and to improve the SMAP retrieval algorithms. The SMAP project and NASA coordinated airborne field experiments at three CVS locations in 2015 and 2016. SMAP Validation Experiment 2015 (SMAPVEX15) was conducted around the Walnut Gulch CVS in Arizona in August 2015. SMAPVEX16 was conducted at the South Fork CVS in Iowa and Carman CVS in Manitoba, Canada from May to August 2016. The airborne PALS (Passive Active L-band Sensor) instrument mapped all experiment areas several times, resulting in 30 measurements coincident with SMAP. The experiments included an intensive ground sampling regime consisting of manual sampling and augmentation of the CVS soil moisture measurements with temporary networks of soil moisture sensors. Analyses using the data from these experiments have produced various results regarding the SMAP validation and related science questions. The SMAPVEX15 data set has been used for calibration of a hyper-resolution model for soil moisture product validation, development of a multi-scale parameterization approach for surface roughness, and validation of disaggregation of SMAP soil moisture with optical thermal signals. The SMAPVEX16 data set has already been used for studying the spatial upscaling within a pixel with highly heterogeneous soil texture distribution; for understanding the process of radiative transfer at plot scale in relation to field scale and SMAP footprint scale over highly heterogeneous vegetation distribution; for testing a data fusion based soil moisture

  20. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    PubMed

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, nearing or at chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator for malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why the term performance validity testing (PVT) may be better than SVT are reviewed. Advances in neuroimaging techniques may be key in better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigidity in interpretation with established cut-scores. A better understanding of how certain types of neurological, neuropsychiatric and/or even test conditions may affect SVT performance is needed.
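
    The distinction the review draws between merely failing a cut-score and scoring significantly below chance can be made concrete with a binomial test on a forced-choice measure. The item count, cut-score, and alpha level in this sketch are hypothetical.

    ```python
    """Illustrative forced-choice check: pass, border zone, or significantly below chance."""
    from scipy.stats import binom

    n_items, n_correct, chance = 50, 15, 0.5      # two-alternative forced choice
    cut_score = 45                                # hypothetical "pass" cut-score

    # One-sided probability of doing this badly or worse when responding at chance.
    p_below_chance = binom.cdf(n_correct, n_items, chance)

    if n_correct >= cut_score:
        verdict = "pass: consistent with valid responding"
    elif p_below_chance < 0.05:
        verdict = "significantly below chance: strong indicator of invalid responding"
    else:
        verdict = "border zone: below the cut-score but not significantly below chance"
    print(f"p(score <= {n_correct} | chance) = {p_below_chance:.4f} -> {verdict}")
    ```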

  1. SNS Moderator Poison Design and Experiment Validation of the Moderator Performance

    NASA Astrophysics Data System (ADS)

    Lu, W.; Iverson, E. B.; Ferguson, P. D.; Crabtree, J. A.; Gallmeier, F. X.; Remec, I.; Baxter, D. V.; Lavelle, C. M.

    2009-08-01

    The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory reached 180 kW in August 2007, becoming the brightest pulsed neutron source in the world. At its full power of 1.4 MW, SNS will have thermal neutron fluxes approximately an order of magnitude greater than any existing pulsed spallation source. It thus poses a serious challenge to the lifetime of the moderator poison sheets. The SNS moderators are integrated with the inner reflector plug (IRP) at a cost of $2 million apiece. Replacing the IRP presents a significant drawback for the facility due to activation and operating costs. Although there are many factors limiting the lifetime of the IRP, such as radiation damage to the structural material and helium production in beryllium, the limiting factor is the lifetime of the moderator poison sheets. The current operating target system of SNS was built with thick Gd poison sheets at a projected lifetime of 3 years. A recent design based on the MCNPX calculation proposed to replace the Gd poison sheets with even thicker Cd poison sheets, aiming to extend the poison sheet lifetime from 3 to 4 years, accompanied by an approximate 5% gain in moderator performance. An experiment was carried out to verify the calculated moderator performance at the Low Energy Neutron Source (LENS), Indiana University, where the spectra of two polyethylene moderators were measured. The moderators are Cd-decoupled and are poisoned with 0.8 mm Gd and 1.2 mm Cd, respectively. The preliminary analysis of the experiment data shows that the characteristics of the measured spectra of the Gd- and Cd-poisoned moderators agree well with what the calculation predicted. A better moderator performance is observed in the Cd-poisoned moderator. The measured ratio of Cd over Gd on the moderator performance is in reasonable agreement with the calculation. Further investigation is underway for a better understanding of the difference between the experiment and the

  2. PSI-Center Simulations of Validation Platform Experiments

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2013-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring application of validation metrics between experimental data and simulation results. Biorthogonal decomposition is proving to be a powerful method to compare global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
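
    Biorthogonal decomposition, mentioned above as a validation tool, is essentially a singular value decomposition of a space-time data matrix into spatial ("topos") and temporal ("chronos") modes. The sketch below applies it to synthetic probe data; the signal, probe layout, and noise level are invented for illustration.

    ```python
    """Illustrative biorthogonal decomposition (BOD) of a synthetic space-time signal."""
    import numpy as np

    def biorthogonal_decomposition(data):
        """data[i, j] = signal at spatial location i and time j; returns spatial modes,
        singular values (mode weights), and temporal modes."""
        topos, sigma, chronos = np.linalg.svd(data, full_matrices=False)
        return topos, sigma, chronos

    rng = np.random.default_rng(2)
    x = np.linspace(0, 2 * np.pi, 32)             # probe positions
    t = np.linspace(0, 1, 200)                    # time base
    signal = (np.outer(np.sin(x), np.cos(2 * np.pi * 5 * t))
              + 0.4 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * 9 * t)))
    data = signal + 0.05 * rng.standard_normal(signal.shape)

    topos, sigma, chronos = biorthogonal_decomposition(data)
    print("energy fraction of first two modes:", (sigma**2 / np.sum(sigma**2))[:2])
    # A validation metric can then correlate the leading topos/chronos of a simulation
    # against those extracted from the measured data.
    ```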

  3. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
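
    One of the benchmark observables named above, the Feynman histogram, reduces to a variance-to-mean statistic over counting gates. A minimal sketch on a synthetic, uncorrelated pulse train is given below; the count rate, measurement time, and gate widths are illustrative.

    ```python
    """Illustrative Feynman-Y statistic from a list of detection times (Poisson train)."""
    import numpy as np

    def feynman_y(event_times, gate_width, t_total):
        """Bin the pulse train into gates and return Y = var/mean - 1 of the gate counts
        (zero for uncorrelated events, positive when fission chains add correlations)."""
        n_gates = int(t_total / gate_width)
        edges = np.arange(n_gates + 1) * gate_width
        counts, _ = np.histogram(event_times, bins=edges)
        return counts.var() / counts.mean() - 1.0

    rng = np.random.default_rng(3)
    t_total, rate = 100.0, 2.0e3                  # seconds, counts per second
    events = np.sort(rng.uniform(0.0, t_total, int(rate * t_total)))

    for gate in (1e-4, 1e-3, 1e-2):               # gate widths in seconds
        print(f"gate {gate:.0e} s: Y = {feynman_y(events, gate, t_total):+.5f}")
    # Comparing measured and simulated Y-versus-gate-width curves is one of the
    # validation comparisons such benchmarks support.
    ```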

  4. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.

  5. The Effect of Aptitude and Experience on Mechanical Job Performance.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.; Carey, Neil B.

    1997-01-01

    The validity of the Armed Services Vocational Aptitude Battery (ASVAB) in predicting mechanical job performance was studied with 891 automotive and 522 helicopter mechanics. The mechanical maintenance component of the ASVAB predicted hands-on performance, job knowledge, and training grades quite well, but experience was more predictive of…

  6. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.

  7. Earth Radiation Budget Experiment (ERBE) validation

    NASA Technical Reports Server (NTRS)

    Barkstrom, Bruce R.; Harrison, Edwin F.; Smith, G. Louis; Green, Richard N.; Kibler, James F.; Cess, Robert D.

    1990-01-01

    During the past 4 years, data from the Earth Radiation Budget Experiment (ERBE) have been undergoing detailed examination. There is no direct source of ground truth for the radiation budget. Thus, this validation effort has had to rely heavily upon intercomparisons between different types of measurements. The ERBE Science Team chose 10 measures of agreement as validation criteria. Late in August 1988, the Team agreed that the data met these conditions. As a result, the final, monthly averaged data products are being archived. These products, their validation, and some results for January 1986 are described. Information is provided on obtaining the data from the archive.

  8. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
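
    A construct-validity check of the kind described above boils down to asking whether a metric separates experience groups. The sketch below runs a Kruskal-Wallis test on synthetic task-time scores for three groups; the group sizes, score distributions, and the choice of test are assumptions for illustration, not the study's statistical protocol.

    ```python
    """Illustrative construct-validity check on synthetic simulator scores."""
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(4)
    groups = {
        "novice":       rng.normal(300, 40, 12),   # task time in seconds (synthetic)
        "intermediate": rng.normal(240, 35, 12),
        "expert":       rng.normal(180, 30, 11),
    }

    stat, p_value = kruskal(*groups.values())
    verdict = "discriminates experience levels" if p_value < 0.05 else "does not discriminate"
    print(f"task time: H = {stat:.2f}, p = {p_value:.4f} -> {verdict}")

    # A task summary metric could then combine only the validated metrics, e.g. as a
    # mean of z-scores with signs flipped so that higher always means more skilled.
    pooled = np.concatenate(list(groups.values()))
    for name, scores in groups.items():
        z = (pooled.mean() - scores.mean()) / pooled.std()
        print(f"{name}: summary z = {z:+.2f}")
    ```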

  9. Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Smith, WIlliam L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier

    2007-01-01

    The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning High-resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft in situ instruments, dedicated dropsondes, radiosondes, and ground-based Raman lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.

  10. CFD validation experiments at the Lockheed-Georgia Company

    NASA Technical Reports Server (NTRS)

    Malone, John B.; Thomas, Andrew S. W.

    1987-01-01

    Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at the Lockheed-Georgia Company. Topics covered include validation experiments on a generic fighter configuration, a transport configuration, and a generic hypersonic vehicle configuration; computational procedures; surface and pressure measurements on wings; laser velocimeter measurements of a multi-element airfoil system; the flowfield around a stiffened airfoil; laser velocimeter surveys of a circulation control wing; circulation control for high lift; and high angle of attack aerodynamic evaluations.

  11. Panamanian women's experience of vaginal examination in labour: A questionnaire validation.

    PubMed

    Bonilla-Escobar, Francisco J; Ortega-Lenis, Delia; Rojas-Mirquez, Johanna C; Ortega-Loubon, Christian

    2016-05-01

    To validate a tool that allows healthcare providers to obtain accurate information regarding Panamanian women's thoughts and feelings about vaginal examination during labour and that can be used in other Latin-American countries. Validation study based on a database from a cross-sectional study carried out in two tertiary care hospitals in Panama City, Panama. Women in the immediate postpartum period who had spontaneous labour onset and uncomplicated deliveries were included in the study from April to August 2008. Researchers used a survey designed by Lewin et al. that included 20 questions related to a patient's experience during a vaginal examination. Five constructs (factors) related to a patient's experience of vaginal examination during labour were identified: Approval (Cronbach's alpha 0.72), Perception (0.67), Rejection (0.40), Consent (0.51), and Stress (0.20). The validity of the scale and its constructs, used to obtain information related to vaginal examination during labour, including patients' experiences with examination and healthcare staff performance, was demonstrated. Utilisation of the scale will allow institutions to identify items that need improvement and address these areas in order to promote the best care for patients in labour.
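
    The internal-consistency figures quoted for each construct are Cronbach's alpha values. A minimal computation on a synthetic item-response matrix is sketched below; the number of items, participants, and response model are made up.

    ```python
    """Illustrative Cronbach's alpha for one construct, on synthetic item responses."""
    import numpy as np

    def cronbach_alpha(items):
        """items[i, j] = response of participant i to item j of one construct."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_variances / total_variance)

    # Four Likert-type items answered by 50 participants, loading on one latent factor.
    rng = np.random.default_rng(5)
    latent = rng.normal(0, 1, 50)
    responses = np.column_stack([latent + rng.normal(0, 0.8, 50) for _ in range(4)])

    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
    ```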

  12. Validation and scaling of soil moisture in a semi-arid environment: SMAP Validation Experiment 2015 (SMAPVEX15)

    USDA-ARS?s Scientific Manuscript database

    The NASA SMAP (Soil Moisture Active Passive) mission conducted the SMAP Validation Experiment 2015 (SMAPVEX15) in order to support the calibration and validation activities of SMAP soil moisture data product. The main goals of the experiment were to address issues regarding the spatial disaggregation...

  13. Airborne Observations and Satellite Validation: INTEX-A Experience and INTEX-B Plans

    NASA Technical Reports Server (NTRS)

    Crawford, James H.; Singh, Hanwant B.; Brune, William H.; Jacob, Daniel J.

    2005-01-01

    Intercontinental Chemical Transport Experiment (INTEX; http://cloudl.arc.nasa.gov) is an ongoing two-phase integrated atmospheric field experiment being performed over North America (NA). Its first phase (INTEX-A) was performed in the summer of 2004 and the second phase (INTEX-B) is planned for the early spring of 2006. The main goal of INTEX-NA is to understand the transport and transformation of gases and aerosols on transcontinental/intercontinental scales and to assess their impact on air quality and climate. Central to achieving this goal is the need to relate space-based observations with those from airborne and surface platforms. During INTEX-A, NASA's DC-8 was joined by some dozen other aircraft from a large number of European and North American partners to focus on the outflow of pollution from NA to the Atlantic. Several instances of Asian pollution over NA were also encountered. INTEX-A flight planning relied extensively on satellite observations, and in turn satellite validation (Terra, Aqua, and Envisat) was given high priority. Over 20 validation profiles were successfully carried out. DC-8 sampling of smoke from Alaskan fires and formaldehyde over forested regions, and simultaneous satellite observations of these, provided excellent opportunities for the interplay of these platforms. The planning for INTEX-B is currently underway, and a vast majority of "standard" and "research" products to be retrieved from Aura instruments will be measured during INTEX-B throughout the troposphere. INTEX-B will focus on the inflow of pollution from Asia to North America and validation of satellite observations with emphasis on Aura. Several national and international partners are expected to coordinate activities with INTEX-B, and we expect its scope to expand in the coming months. An important new development involves partnership with an NSF-sponsored campaign called MIRAGE (Megacity Impacts on Regional and Global Environments - Mexico City Pollution Outflow Field

  14. Changes and Issues in the Validation of Experience

    ERIC Educational Resources Information Center

    Triby, Emmanuel

    2005-01-01

    This article analyses the main changes in the rules for validating experience in France and of what they mean for society. It goes on to consider university validation practices. The way in which this system is evolving offers a chance to identify the issues involved for the economy and for society, with particular attention to the expected…

  15. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
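
    The cross-validation protocol of Experiment 1 can be sketched generically: five consecutive 6-year blocks over 1979-2008, training on four blocks and validating on the held-out one. The "downscaling method" below is a trivial linear regression and the bias index is a placeholder; neither represents any submitted VALUE method.

    ```python
    """Illustrative blocked 5-fold cross-validation over consecutive 6-year periods."""
    import numpy as np

    years = np.arange(1979, 2009)                 # 30 years
    folds = np.split(years, 5)                    # five consecutive 6-year blocks

    rng = np.random.default_rng(6)
    predictor = rng.normal(0.0, 1.0, years.size)  # stand-in for an ERA-Interim predictor
    observed = 2.0 * predictor + rng.normal(0.0, 0.5, years.size)  # station series

    for k, test_years in enumerate(folds):
        test = np.isin(years, test_years)
        train = ~test
        slope, intercept = np.polyfit(predictor[train], observed[train], 1)
        downscaled = slope * predictor[test] + intercept
        bias = downscaled.mean() - observed[test].mean()   # one marginal validation index
        print(f"fold {k + 1} ({test_years[0]}-{test_years[-1]}): bias = {bias:+.3f}")
    ```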

  16. Construct validity of the individual work performance questionnaire.

    PubMed

    Koopmans, Linda; Bernaards, Claire M; Hildebrandt, Vincent H; de Vet, Henrica C W; van der Beek, Allard J

    2014-03-01

    To examine the construct validity of the Individual Work Performance Questionnaire (IWPQ). A total of 1424 Dutch workers from three occupational sectors (blue, pink, and white collar) participated in the study. First, IWPQ scores were correlated with related constructs (convergent validity). Second, differences between known groups were tested (discriminative validity). First, IWPQ scores correlated weakly to moderately with absolute and relative presenteeism, and work engagement. Second, significant differences in IWPQ scores were observed for workers differing in job satisfaction, and workers differing in health. Overall, the results indicate acceptable construct validity of the IWPQ. Researchers are provided with a reliable and valid instrument to measure individual work performance comprehensively and generically, among workers from different occupational sectors, with and without health problems.

  17. Reliability and validity of the neurorehabilitation experience questionnaire for inpatients.

    PubMed

    Kneebone, Ian I; Hull, Samantha L; McGurk, Rhona; Cropley, Mark

    2012-09-01

    Patient-centered measures of the inpatient neurorehabilitation experience are needed to assess services. The objective of this study was to develop a valid and reliable Neurorehabilitation Experience Questionnaire (NREQ) to assess whether neurorehabilitation inpatients experience service elements important to them. Based on the themes established in prior qualitative research, adopting questions from established inventories and using a literature review, a draft version of the NREQ was generated. Focus groups and interviews were conducted with 9 patients and 26 staff from neurological rehabilitation units to establish face validity. Then, 70 patients were recruited to complete the NREQ to ascertain reliability (internal and test-retest) and concurrent validity. On the basis of the face validity testing, several modifications were made to the draft version of the NREQ. Subsequently, internal reliability (time 1 α = .76, time 2 α = .80), test retest reliability (r = 0.70), and concurrent validity (r = 0.32 and r = 0.56) were established for the revised version. Whereas responses were associated with positive mood (r = 0.30), they appeared not to be influenced by negative mood, age, education, length of stay, sex, functional independence, or whether a participant had been a patient on a unit previously. Preliminary validation of the NREQ suggests promise for use with its target population.

  18. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application-dependent cross section validation. The approach is based on the well-known Generalized Linear Least Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
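
    For orientation, a single GLLS adjustment step of the kind the methodology builds on is sketched below with hypothetical sensitivities, covariances, and C/E discrepancies; the leave-one-out "importance" proxy at the end is only an illustration of how benchmark weighting factors might be probed, not the Subgroup 39 formulation.

    ```python
    """Illustrative GLLS adjustment step with hypothetical numbers."""
    import numpy as np

    S = np.array([[0.8, 0.1],     # benchmark 1 sensitivities to two cross-section groups
                  [0.3, 0.6],     # benchmark 2
                  [0.1, 0.9]])    # benchmark 3
    M = np.diag([0.04**2, 0.06**2])              # prior cross-section covariance
    V = np.diag([0.002**2, 0.003**2, 0.010**2])  # benchmark (C/E) uncertainties
    d = np.array([0.004, -0.002, 0.006])         # C/E - 1 discrepancies

    G = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)  # gain matrix
    delta_sigma = G @ d                           # adjusted cross-section changes
    M_post = M - G @ S @ M                        # posterior covariance
    print("adjustment:", delta_sigma)

    # Importance proxy: posterior-variance reduction (group 1) lost if a benchmark is dropped.
    for i in range(S.shape[0]):
        keep = [j for j in range(S.shape[0]) if j != i]
        G_i = M @ S[keep].T @ np.linalg.inv(S[keep] @ M @ S[keep].T + V[np.ix_(keep, keep)])
        M_i = M - G_i @ S[keep] @ M
        print(f"benchmark {i + 1}: extra variance without it = {M_i[0, 0] - M_post[0, 0]:.2e}")
    ```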

  19. Methodology for turbulence code validation: Quantification of simulation-experiment agreement and application to the TORPEX experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, Paolo; Theiler, C.; Fasoli, A.

    A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and - finally - how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
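
    In the spirit of the methodology described above, the sketch below scores each observable by its experiment-simulation difference normalised by the combined uncertainties and then combines the scores into one number. The observables, uncertainties, and weighting scheme are invented for illustration and do not reproduce the paper's metric.

    ```python
    """Illustrative composite validation metric over several observables."""
    import numpy as np

    # observable: (experiment, exp. uncertainty, simulation, sim. uncertainty)
    observables = {
        "profile width":     (2.1, 0.2, 2.5, 0.3),
        "fluctuation level": (0.30, 0.05, 0.26, 0.04),
        "blob velocity":     (800.0, 120.0, 950.0, 150.0),
    }

    scores, weights = [], []
    for name, (exp, d_exp, sim, d_sim) in observables.items():
        combined = np.hypot(d_exp, d_sim)          # combined 1-sigma uncertainty
        distance = abs(exp - sim) / combined       # disagreement in "sigma" units
        precision = abs(exp) / combined            # better-resolved observables count more
        scores.append(distance)
        weights.append(precision)
        print(f"{name}: {distance:.2f} sigma")

    composite = np.average(scores, weights=weights)
    print(f"composite agreement metric: {composite:.2f} (smaller is better)")
    ```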

  20. Numerical Investigation of the Performance of a Supersonic Combustion Chamber and Comparison with Experiments

    NASA Astrophysics Data System (ADS)

    Banica, M. C.; Chun, J.; Scheuermann, T.; Weigand, B.; Wolfersdorf, J. v.

    2009-01-01

    Scramjet-powered vehicles can decrease costs for access to space, but substantial obstacles still exist in their realization. For example, experiments in the relevant Mach number regime are difficult to perform and flight testing is expensive. Therefore, numerical methods are often employed for system layout, but they require validation against experimental data. Here, we validate the commercial code CFD++ against experimental results for hydrogen combustion in the supersonic combustion facility of the Institute of Aerospace Thermodynamics (ITLR) at the Universität Stuttgart. Fuel is injected through a lobed strut injector, which provides rapid mixing. Our numerical data show reasonable agreement with the experiments. We further investigate effects of varying equivalence ratios on several important performance parameters.

  1. The Grand Banks ERS-1 SAR wave spectra validation experiment

    NASA Technical Reports Server (NTRS)

    Vachon, P. W.; Dobson, F. W.; Smith, S. D.; Anderson, R. J.; Buckley, J. R.; Allingham, M.; Vandemark, D.; Walsh, E. J.; Khandekar, M.; Lalbeharry, R.

    1993-01-01

    As part of the ERS-1 validation program, the ERS-1 Synthetic Aperture Radar (SAR) wave spectra validation experiment was carried out over the Grand Banks of Newfoundland (Canada) in Nov. 1991. The principal objective of the experiment was to obtain complete sets of wind and wave data from a variety of calibrated instruments to validate SAR measurements of ocean wave spectra. The field program activities are described and the rather complex wind and wave conditions which were observed are summarized. Spectral comparisons with ERS-1 SAR image spectra are provided. The ERS-1 SAR is shown to have measured swell and range traveling wind seas, but did not measure azimuth traveling wind seas at any time during the experiment. Results of velocity bunching forward mapping and new measurements of the relationship between wind stress and sea state are also shown.

  2. Observations with the ROWS instrument during the Grand Banks calibration/validation experiments

    NASA Technical Reports Server (NTRS)

    Vandemark, D.; Chapron, B.

    1994-01-01

    As part of a global program to validate the ocean surface sensors on board ERS-1, a joint experiment on the Grand Banks of Newfoundland was carried out in Nov. 1991. The principal objective was to provide a field validation of ERS-1 Synthetic Aperture Radar (SAR) measurement of ocean surface structure. The NASA-P3 aircraft measurements made during this experiment provide independent measurements of the ocean surface along the validation swath. The Radar Ocean Wave Spectrometer (ROWS) is a radar sensor designed to measure direction of the long wave components using spectral analysis of the tilt induced radar backscatter modulation. This technique greatly differs from SAR and thus, provides a unique set of measurements for use in evaluating SAR performance. Also, an altimeter channel in the ROWS gives simultaneous information on the surface wave height and radar mean square slope parameter. The sets of geophysical parameters (wind speed, significant wave height, directional spectrum) are used to study the SAR's ability to accurately measure ocean gravity waves. The known distortion imposed on the true directional spectrum by the SAR imaging mechanism is discussed in light of the direct comparisons between ERS-1 SAR, airborne Canadian Center for Remote Sensing (CCRS) SAR, and ROWS spectra and the use of the nonlinear ocean SAR transform.

  3. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of remaining life of electronic components is of key importance. DC-DC power converters are power electronics systems employed typically as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.

  4. Validation Experiences and Persistence among Urban Community College Students

    ERIC Educational Resources Information Center

    Barnett, Elisabeth A.

    2007-01-01

    The purpose of this research was to examine the extent to which urban community college students' experiences with validation by faculty contributed to their sense of integration in college and whether this, in turn, contributed to their intent to persist in college. This study focused on urban community college students' validating experiences…

  5. Validating vignette and conjoint survey experiments against real-world behavior

    PubMed Central

    Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei

    2015-01-01

    Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2 percentage points of the effects in the behavioral benchmark. PMID:25646415

  6. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools.

    PubMed

    Rickards, Tyler A; Cranston, Christopher C; Touradji, Pegah; Bechtold, Kathleen T

    2018-01-01

    This article suggests clinically useful tools for the efficient use of embedded measures of performance validity in neuropsychological assessment. To accomplish this, we integrated available validity-related and statistical research from the literature, consensus statements, and survey-based data from practicing neuropsychologists. We provide recommendations on 1) cutoffs for embedded performance validity tests, including Reliable Digit Span, California Verbal Learning Test (Second Edition) Forced Choice Recognition, Rey-Osterrieth Complex Figure Test Combination Score, Wisconsin Card Sorting Test Failure to Maintain Set, and the Finger Tapping Test; 2) selecting the number of performance validity measures to administer in an assessment; and 3) hypothetical clinical decision-making models for the use of performance validity testing in a neuropsychological assessment that collectively consider behavior, patient reporting, and data indicating invalid or noncredible performance. Performance validity testing helps inform the clinician about an individual's general approach to tasks: response to failure, task engagement and persistence, and compliance with task demands. These data-driven clinical suggestions give clinicians a practical resource, encourage more uniform and testable decision-making within the field, and help guide future research in this area.

  7. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral
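
    As a rough illustration of fitting the V50 constant mentioned above, the sketch below uses the target-task-performance probability function commonly quoted in the NVESD literature; the exact exponent form and the behavioral data points are assumptions for illustration, not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def ttpf(V, V50):
    """Target-task-performance probability function (a common NVESD form;
    treated here as an assumption, not the exact NV-IPM internals)."""
    E = 1.51 + 0.24 * (V / V50)
    r = (V / V50) ** E
    return r / (1.0 + r)

# hypothetical data: resolvable TTP cycles on target vs. fraction correct
V = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
p_obs = np.array([0.15, 0.35, 0.60, 0.85, 0.95])

V50_fit, _ = curve_fit(ttpf, V, p_obs, p0=[4.0])
print("fitted V50:", round(V50_fit[0], 2))
```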

  8. NEXT Performance Curve Analysis and Validation

    NASA Technical Reports Server (NTRS)

    Saripalli, Pratik; Cardiff, Eric; Englander, Jacob

    2016-01-01

    Performance curves of the NEXT thruster are highly important in determining the thruster's ability to meet mission-specific goals. New performance curves are proposed and examined here. The Evolutionary Mission Trajectory Generator (EMTG) is used to verify variations in mission solutions based on both the available thruster curves and the new curves generated. Furthermore, variations in beginning-of-life (BOL) and end-of-life (EOL) curves are also examined. Mission design results shown here validate the use of EMTG and the new performance curves.

  9. Development and validation of a music performance anxiety inventory for gifted adolescent musicians.

    PubMed

    Osborne, Margaret S; Kenny, Dianna T

    2005-01-01

    Music performance anxiety (MPA) is a distressing experience for musicians of all ages, yet the empirical investigation of MPA in adolescents has received little attention to date. No measures specifically targeting MPA in adolescents have been empirically validated. This article presents findings of an initial study into the psychometric properties and validation of the Music Performance Anxiety Inventory for Adolescents (MPAI-A), a new self-report measure of MPA for this group. Data from 381 elite young musicians aged 12-19 years was used to investigate the factor structure, internal reliability, construct and divergent validity of the MPAI-A. Cronbach's alpha for the full measure was .91. Factor analysis identified three factors, which together accounted for 53% of the variance. Construct validity was demonstrated by significant positive relationships with social phobia (measured using the Social Phobia Anxiety Inventory [Beidel, D. C., Turner, S. M., & Morris, T. L. (1995). A new inventory to assess childhood social anxiety and phobia: The Social Phobia and Anxiety Inventory for Children. Psychological Assessment, 7(1), 73-79; Beidel, D. C., Turner, S. M., & Morris, T. L. (1998). Social Phobia and Anxiety Inventory for Children (SPAI-C). North Tonawanda, NY: Multi-Health Systems Inc.]) and trait anxiety (measured using the State Trait Anxiety Inventory [Spielberger, C. D. (1983). State-Trait Anxiety Inventory STAI (Form Y). Palo Alto, CA: Consulting Psychologists Press, Inc.]). The MPAI-A demonstrated convergent validity by a moderate to strong positive correlation with an adult measure of MPA. Discriminant validity was established by a weaker positive relationship with depression, and no relationship with externalizing behavior problems. It is hoped that the MPAI-A, as the first empirically validated measure of adolescent musicians' performance anxiety, will enhance and promote phenomenological and treatment research in this area.
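
    For readers unfamiliar with the internal-consistency statistic reported above, a minimal sketch of Cronbach's alpha for a respondent-by-item score matrix follows; the ratings are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# toy usage: hypothetical 5-point ratings from 6 respondents on 4 items
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [4, 4, 4, 3],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 3))
```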

  10. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin.

    PubMed

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2015-11-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a "complete mystical experience" that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. © The Author(s) 2015.

  11. CFD validation experiments at McDonnell Aircraft Company

    NASA Technical Reports Server (NTRS)

    Verhoff, August

    1987-01-01

    Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at McDonnell Aircraft Company. Topics covered include a high speed research model, a supersonic persistence fighter model, a generic fighter wing model, surface grids, force and moment predictions, surface pressure predictions, forebody models with 65 degree clipped delta wings, and the low aspect ratio wing/body experiment.

  12. Evidences of Validity of a Scale for Mapping Professional as Defining Competences and Performance by Brazilian Tutors

    ERIC Educational Resources Information Center

    Coelho, Francisco Antonio, Jr.; Ferreira, Rodrigo Rezende; Paschoal, Tatiane; Faiad, Cristiane; Meneses, Paulo Murce

    2015-01-01

    The purpose of this study was twofold: to assess evidence of construct validity of the Brazilian Scale of Tutors Competences in the field of Open and Distance Learning and to examine if variables such as professional experience, perception of the student's learning performance and prior experience influence the development of technical and…

  13. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, Johb C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The collaboration includes work by NASA research engineers; CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft focuses on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  14. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin

    PubMed Central

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2016-01-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a “complete mystical experience” that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. PMID:26442957

  15. Cross-Validation of Predictor Equations for Armor Crewman Performance

    DTIC Science & Technology

    1980-01-01

    Technical Report 447: Cross-Validation of Predictor Equations for Armor Crewman Performance, by Anthony J. Maitland, Newell K. Eaton, and Janet F. Neft...

  16. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of the particular physical phenomenon that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  17. Performance Ratings: Designs for Evaluating Their Validity and Accuracy.

    DTIC Science & Technology

    1986-07-01

    ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately...The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings...Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of

  18. Construct Validity of Three Clerkship Performance Assessments

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2010-01-01

    This study examined construct validity of three commonly used clerkship performance assessments: preceptors' evaluations, OSCE-type clinical performance measures, and the NBME [National Board of Medical Examiners] medicine subject examination. Six hundred and eighty-six students taking the inpatient medicine clerkship from 2003 to 2007…

  19. [Validation study of the Depressive Experience Questionnaire].

    PubMed

    Atger, F; Frasson, G; Loas, G; Guibourgé, S; Corcos, M; Perez Diaz, F; Speranza, M; Venisse, J-L; Lang, F; Stephan, Ph; Bizouard, P; Flament, M; Jeammet, Ph

    2003-01-01

    sample (500 female and 160 male undergraduates). Principal component analysis within sex performed on the answers to DEQ confirmed his assumption in identifying two principal depressive dimensions. The first factor involved items that are primarily externally directed and refer to a disturbance of interpersonal relationships (anaclitism); the second factor consists of items that are more internally directed and reflect concerns about self-identity (self-criticism). A third factor emerged, assessing the subject's good functioning and confidence in his or her resources and capacities (efficacy). Scales derived from these factors have high internal consistency and substantial test-retest reliability. The solutions for men and women were highly congruent. Factor structure has been replicated in several nonclinical and clinical samples, providing considerable evidence for the construct validity of the DEQ Dependency and Self-criticism scales. An adolescent form of the DEQ (DEQ-A) has subsequently been developed. Factor analysis revealed three factors that were highly congruent in female and male students and with the three factors of the original DEQ. The reliability, internal consistency and validity of the DEQ-A indicate that the DEQ-A closely parallels the DEQ, especially in the articulation of Dependency and Self-criticism as two factors in depression. These formulations and clinical observations about the importance of differentiating a depression focused on issues of self-criticism from issues of dependency are consistent with the formulations of other theorists who, from very different theoretical perspectives, posit 2 types of depression, one in which either perceived loss or rejection in social relationships is central and the other in which perceived failure in achievement, guilt or lack of control serves as the precipitant of depression. These 2 types of experiences have been characterized as "dominant other" and "dominant goal", as anxiously attached and compulsively self

  20. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. The Intersectionality of Culturally Responsive Teaching and Performance Poetry: Validating Secondary Latino Youth and Their Community

    ERIC Educational Resources Information Center

    Ramirez, Pablo C.; Jimenez-Silva, Margarita

    2015-01-01

    In this article the authors draw from culturally responsive teaching and multicultural education to describe performance poetry as an effective strategy for validating secondary aged Latino youths' lived experiences. Supported by teacher modeling and the incorporation of community poets, students created and shared their own powerful poems that…

  2. Anti-Collision Function Design and Performances of the CNES Formation Flying Experiment on the PRISMA Mission

    NASA Technical Reports Server (NTRS)

    Cayeux, P.; Raballand, F.; Borde, J.; Berges, J.-C.; Meyssignac, B.

    2007-01-01

    Within the framework of a partnership agreement, EADS ASTRIUM has worked since June 2006 on the CNES formation flying experiment on the PRISMA mission. EADS ASTRIUM is responsible for the anti-collision function. This responsibility covers the design and the development of the function as a Matlab/Simulink library, as well as its functional validation and performance assessment. PRISMA is a technology in-orbit testbed mission from the Swedish National Space Board, mainly devoted to formation flying demonstration. PRISMA is made of two micro-satellites that will be launched in 2009 on a quasi-circular SSO at about 700 km altitude. The CNES FFIORD experiment embedded on PRISMA aims at flight validating an FFRF sensor designed for formation control and assessing its performance, in preparation for future formation flying missions such as Simbol X; FFIORD also aims at validating various typical autonomous rendezvous and formation guidance and control algorithms. This paper presents the principles of the collision avoidance function developed by EADS ASTRIUM for FFIORD; three kinds of maneuvers were implemented and are presented in this paper with their performance.

  3. In-Flight Thermal Performance of the Lidar In-Space Technology Experiment

    NASA Technical Reports Server (NTRS)

    Roettker, William

    1995-01-01

    The Lidar In-Space Technology Experiment (LITE) was developed at NASA's Langley Research Center to explore the applications of lidar operated from an orbital platform. As a technology demonstration experiment, LITE was developed to gain experience designing and building future operational orbiting lidar systems. Since LITE was the first lidar system to be flown in space, an important objective was to validate instrument design principles in such areas as thermal control, laser performance, instrument alignment and control, and autonomous operations. Thermal and structural analysis models of the instrument were developed during the design process to predict the behavior of the instrument during its mission. In order to validate those mathematical models, extensive engineering data was recorded during all phases of LITE's mission. This inflight engineering data was compared with preflight predictions and, when required, adjustments to the thermal and structural models were made to more accurately match the instrument's actual behavior. The results of this process for the thermal analysis and design of LITE are presented in this paper.

  4. Policy and Validity Prospects for Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  5. Validation of RNAi Silencing Efficiency Using Gene Array Data shows 18.5% Failure Rate across 429 Independent Experiments.

    PubMed

    Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs

    2016-09-27

    No independent cross-validation of success rate for studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, these parameters must be evaluated across a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of cell line model and validation method had the highest influence on silencing proficiency.
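
    A minimal sketch of the two computations named above (per-experiment fold change of the target gene, and a Wilcoxon signed-rank test for down-regulation) is shown below; the expression values are invented, and the 0.7/0.5 thresholds are applied only for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

def silencing_fold_change(expr_before, expr_after):
    """Linear-scale fold change of the target gene after siRNA treatment.
    Values near 1 mean the knockdown failed; the 0.7 and 0.5 thresholds
    from the abstract are applied here only for illustration."""
    fc = expr_after / expr_before
    return fc, fc > 0.7, fc > 0.5

# hypothetical paired expression values for one target across 5 experiments
before = np.array([10.2, 8.5, 12.0, 9.1, 11.3])
after  = np.array([4.1, 6.9, 3.5, 8.8, 5.0])

fc, fail_at_07, fail_at_05 = silencing_fold_change(before, after)
stat, p = wilcoxon(after, before, alternative="less")  # test for down-regulation
print(np.round(fc, 2), fail_at_07, "Wilcoxon p =", round(p, 4))
```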

  6. Electrolysis Performance Improvement Concept Study (EPICS) flight experiment phase C/D

    NASA Technical Reports Server (NTRS)

    Schubert, F. H.; Lee, M. G.

    1995-01-01

    The overall purpose of the Electrolysis Performance Improvement Concept Study flight experiment is to demonstrate and validate in a microgravity environment the Static Feed Electrolyzer concept as well as investigate the effect of microgravity on water electrolysis performance. The scope of the experiment includes variations in microstructural characteristics of electrodes and current densities in a static feed electrolysis cell configuration. The results of the flight experiment will be used to improve efficiency of the static feed electrolysis process and other electrochemical regenerative life support processes by reducing power and expanding the operational range. Specific technologies that will benefit include water electrolysis for propulsion, energy storage, life support, extravehicular activity, in-space manufacturing and in-space science in addition to other electrochemical regenerative life support technologies such as electrochemical carbon dioxide and oxygen separation, electrochemical oxygen compression and water vapor electrolysis. The Electrolysis Performance Improvement Concept Study flight experiment design incorporates two primary hardware assemblies: the Mechanical/Electrochemical Assembly and the Control/Monitor Instrumentation. The Mechanical/Electrochemical Assembly contains three separate integrated electrolysis cells along with supporting pressure and temperature control components. The Control/Monitor Instrumentation controls the operation of the experiment via the Mechanical/Electrochemical Assembly components and provides for monitoring and control of critical parameters and storage of experimental data.

  7. Performance Validation Approach for the GTX Air-Breathing Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Trefny, Charles J.; Roche, Joseph M.

    2002-01-01

    The primary objective of the GTX effort is to determine whether or not air-breathing propulsion can enable a launch vehicle to achieve orbit in a single stage. Structural weight, vehicle aerodynamics, and propulsion performance must be accurately known over the entire flight trajectory in order to make a credible assessment. Structural, aerodynamic, and propulsion parameters are strongly interdependent, which necessitates a system approach to design, evaluation, and optimization of a single-stage-to-orbit concept. The GTX reference vehicle serves this purpose, by allowing design, development, and validation of components and subsystems in a system context. The reference vehicle configuration (including propulsion) was carefully chosen so as to provide high potential for structural and volumetric efficiency, and to allow the high specific impulse of air-breathing propulsion cycles to be exploited. Minor evolution of the configuration has occurred as analytical and experimental results have become available. With this development process comes increasing validation of the weight and performance levels used in system performance determination. This paper presents an overview of the GTX reference vehicle and the approach to its performance validation. Subscale test rigs and numerical studies used to develop and validate component performance levels and unit structural weights are outlined. The sensitivity of the equivalent, effective specific impulse to key propulsion component efficiencies is presented. The role of flight demonstration in development and validation is discussed.

  8. Validation Experiences and Persistence among Community College Students

    ERIC Educational Resources Information Center

    Barnett, Elisabeth A.

    2011-01-01

    The purpose of this correlational research was to examine the extent to which community college students' experiences with validation by faculty (Rendon, 1994, 2002) predicted: (a) their sense of integration, and (b) their intent to persist. The research was designed as an elaboration of constructs within Tinto's (1993) Longitudinal Model of…

  9. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford
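
    The authors' exact DSCVR algorithm is not reproduced here; the sketch below shows a generic greedy D-optimal selection for logistic regression, in which each candidate record's contribution to the Fisher information is evaluated and the determinant-maximizing record is added. The pilot coefficients and design matrix are hypothetical.

```python
import numpy as np

def greedy_d_optimal(X, beta_pilot, n_select):
    """Greedy D-optimal choice of records to validate (illustrative sketch,
    not the authors' exact DSCVR algorithm). X includes an intercept column."""
    p = 1.0 / (1.0 + np.exp(-X @ beta_pilot))
    w = p * (1.0 - p)                # logistic Fisher-information weights
    k = X.shape[1]
    info = 1e-6 * np.eye(k)          # tiny ridge keeps the determinant finite
    chosen = []
    for _ in range(n_select):
        best_i, best_logdet = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            cand = info + w[i] * np.outer(X[i], X[i])
            logdet = np.linalg.slogdet(cand)[1]   # log-det avoids overflow
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        chosen.append(best_i)
        info += w[best_i] * np.outer(X[best_i], X[best_i])
    return chosen

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
beta_pilot = np.array([-2.0, 1.0, 0.5])    # hypothetical pilot estimate
print(greedy_d_optimal(X, beta_pilot, n_select=10))
```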

  10. The Development of the Functional Literacy Experience Scale Based upon Ecological Theory (FLESBUET) and Validity-Reliability Study

    ERIC Educational Resources Information Center

    Özenç, Emine Gül; Dogan, M. Cihangir

    2014-01-01

    This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…

  11. EXCALIBUR-at-CALIBAN: a neutron transmission experiment for 238U(n,n' continuum γ) nuclear data validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, David; Leconte, Pierre; Destouches, Christophe

    2015-07-01

    Two recent papers justified a new experimental program to give a new basis for the validation of 238U nuclear data, namely neutron induced inelastic scattering, and of transport codes at neutron fission energies. The general idea is to perform a neutron transmission experiment through natural uranium material. As shown by Hans Bethe, neutron transmissions measured by dosimetric responses are linked to inelastic cross sections. This paper describes the principle and the results of such an experiment, called EXCALIBUR, performed recently (January and October 2014) at the CALIBAN reactor facility. (authors)

  12. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks were proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained for Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  13. Validating An Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments

    NASA Astrophysics Data System (ADS)

    Catanzarite, Joseph; Burke, Christopher J.; Li, Jie; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    The Kepler Mission is developing an Analytic Completeness Model (ACM) to estimate detection completeness contours as a function of exoplanet radius and period for each target star. Accurate completeness contours are necessary for robust estimation of exoplanet occurrence rates. The main components of the ACM for a target star are: detection efficiency as a function of SNR, the window function (WF) and the one-sigma depth function (OSDF). (Ref. Burke et al. 2015). The WF captures the falloff in transit detection probability at long periods that is determined by the observation window (the duration over which the target star has been observed). The OSDF is the transit depth (in parts per million) that yields SNR of unity for the full transit train. It is a function of period, and accounts for the time-varying properties of the noise and for missing or deweighted data. We are performing flux-level transit injection (FLTI) experiments on selected Kepler target stars with the goal of refining and validating the ACM. “Flux-level” injection machinery inserts exoplanet transit signatures directly into the flux time series, as opposed to “pixel-level” injection, which inserts transit signatures into the individual pixels using the pixel response function. See Jie Li's poster: ID #2493668, "Flux-level transit injection experiments with the NASA Pleiades Supercomputer" for details, including performance statistics. Since FLTI is affordable for only a small subset of the Kepler targets, the ACM is designed to apply to most Kepler target stars. We validate this model using “deep” FLTI experiments, with ~500,000 injection realizations on each of a small number of targets and “shallow” FLTI experiments with ~2000 injection realizations on each of many targets. From the results of these experiments, we identify anomalous targets, model their behavior and refine the ACM accordingly. In this presentation, we discuss progress in validating and refining the ACM, and we
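
    Conceptually, the ACM combines a detection-efficiency curve, the one-sigma depth function, and the window function. The sketch below puts those pieces together with placeholder functional forms (a gamma-CDF efficiency curve and toy OSDF/WF inputs); none of the numbers are the mission's fitted values.

```python
import numpy as np
from scipy.stats import gamma

def completeness(period_days, radius_re, rstar_re, osdf_ppm, window_frac):
    """Toy analytic-completeness evaluation: detection efficiency as a
    function of SNR, multiplied by the window function. The gamma-CDF
    efficiency curve and the toy OSDF/WF inputs are placeholders."""
    depth_ppm = 1e6 * (radius_re / rstar_re) ** 2   # simple transit depth
    snr = depth_ppm / osdf_ppm(period_days)         # OSDF gives 1-sigma depth
    eff = gamma.cdf(snr, a=4.65, scale=0.98)        # illustrative shape only
    return eff * window_frac(period_days)

# hypothetical one-sigma depth function and window function for one target
osdf = lambda P: 30.0 * (P / 10.0) ** 0.5            # ppm, grows with period
wf   = lambda P: np.clip(1.0 - P / 700.0, 0.0, 1.0)  # falls off near window end

for P, R in [(10.0, 1.0), (100.0, 1.0), (100.0, 2.0)]:
    c = completeness(P, R, rstar_re=109.0, osdf_ppm=osdf, window_frac=wf)
    print(P, R, round(c, 3))
```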

  14. Validation of Skills, Knowledge and Experience in Lifelong Learning in Europe

    ERIC Educational Resources Information Center

    Ogunleye, James

    2012-01-01

    The paper examines systems of validation of skills and experience as well as the main methods/tools currently used for validating skills and knowledge in lifelong learning. The paper uses mixed methods--a case study research and content analysis of European Union policy documents and frameworks--as a basis for this research. The selection of the…

  15. Impact of External Cue Validity on Driving Performance in Parkinson's Disease

    PubMed Central

    Scally, Karen; Charlton, Judith L.; Iansek, Robert; Bradshaw, John L.; Moss, Simon; Georgiou-Karistianis, Nellie

    2011-01-01

    This study sought to investigate the impact of external cue validity on simulated driving performance in 19 Parkinson's disease (PD) patients and 19 healthy age-matched controls. Braking points and distance between deceleration point and braking point were analysed for red traffic signals preceded either by Valid Cues (correctly predicting signal), Invalid Cues (incorrectly predicting signal), and No Cues. Results showed that PD drivers braked significantly later and travelled significantly further between deceleration and braking points compared with controls for Invalid and No-Cue conditions. No significant group differences were observed for driving performance in response to Valid Cues. The benefit of Valid Cues relative to Invalid Cues and No Cues was significantly greater for PD drivers compared with controls. Trail Making Test (B-A) scores correlated with driving performance for PDs only. These results highlight the importance of external cues and higher cognitive functioning for driving performance in mild to moderate PD. PMID:21789275

  16. Validation and Performance Comparison of Numerical Codes for Tsunami Inundation

    NASA Astrophysics Data System (ADS)

    Velioglu, D.; Kian, R.; Yalciner, A. C.; Zaytsev, A.

    2015-12-01

    In inundation zones, tsunami motion turns from wave motion to flow of water. Modelling of this phenomenon is a complex problem since there are many parameters affecting the tsunami flow. In this respect, the performance of numerical codes that analyze tsunami inundation patterns becomes important. The computation of water surface elevation is not sufficient for proper analysis of tsunami behaviour in shallow water zones and on land, and hence for the development of mitigation strategies. Velocity and velocity patterns are also crucial parameters and have to be computed at the highest accuracy. There are numerous numerical codes to be used for simulating tsunami inundation. In this study, the FLOW 3D and NAMI DANCE codes are selected for validation and performance comparison. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. FLOW 3D is used specifically for flood problems. NAMI DANCE uses a finite-difference computational method to solve the linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In this study, these codes are validated and their performances are compared using two benchmark problems which were discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA. One of the problems is an experiment of a single long-period wave propagating up a piecewise linear slope and onto a small-scale model of the town of Seaside, Oregon. The other benchmark problem is an experiment of a single solitary wave propagating up a triangular-shaped shelf with an island feature located at the offshore point of the shelf. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. All results are presented with discussions and comparisons. The research leading to these
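
    NAMI DANCE's actual scheme is not shown here; as a minimal illustration of the kind of finite-difference shallow-water solve described above, the sketch below advances the 1D nonlinear equations over a flat bottom with a Lax-Friedrichs step.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def swe_step_lax_friedrichs(h, hu, dx, dt):
    """One Lax-Friedrichs step for the 1D nonlinear shallow-water equations
    (flat bottom, reflective ends) -- a minimal illustration, not NAMI DANCE."""
    u = hu / h
    f1 = hu                           # mass flux
    f2 = hu * u + 0.5 * g * h ** 2    # momentum flux
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] = 0.5 * (h[2:] + h[:-2]) - dt / (2 * dx) * (f1[2:] - f1[:-2])
    hu_new[1:-1] = 0.5 * (hu[2:] + hu[:-2]) - dt / (2 * dx) * (f2[2:] - f2[:-2])
    hu_new[0] = hu_new[-1] = 0.0      # reflective walls
    return h_new, hu_new

# toy run: a small hump of water collapsing in a 1 km channel
x = np.linspace(0.0, 1000.0, 201)
h = 10.0 + np.exp(-((x - 500.0) / 50.0) ** 2)
hu = np.zeros_like(h)
dx = x[1] - x[0]
dt = 0.4 * dx / np.sqrt(g * h.max())  # CFL-limited time step
for _ in range(200):
    h, hu = swe_step_lax_friedrichs(h, hu, dx, dt)
print(round(h.max(), 3), round(h.min(), 3))
```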

  17. Validation of hot-poured crack sealant performance-based guidelines.

    DOT National Transportation Integrated Search

    2017-06-01

    This report summarizes a comprehensive research effort to validate thresholds for performance-based guidelines and grading system for hot-poured asphalt crack sealants. A series of performance tests were established in earlier research and includ...

  18. TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, J; Park, J; Kim, L

    2016-06-15

    Purpose: Newly published medical physics practice guideline (MPPG 5.a.) has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles and off-axis conditions to verify the robustness and limitations of a dose calculation algorithm. A comparison between measured and calculated dose was performed based on validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. The disagreement is easily identifiable with the difference curve. Subtle discrepancy has revealed the limitation of the measurement, e.g., a spike at the high dose region and an asymmetrical penumbra observed on the tests with an oblique MLC beam. The excellent results we had (> 98% pass rate on 3%/3mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality beam data and the good understanding of the modeling. The limitation of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.
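
    The 3%/3mm gamma criterion mentioned above can be sketched in a few lines for 1D profiles; this brute-force version assumes global normalization and is for illustration only, not a clinical tool.

```python
import numpy as np

def gamma_index_1d(x_meas, d_meas, x_calc, d_calc, dd=0.03, dta=3.0):
    """1D global gamma index (3%/3mm by default); dose in relative units,
    positions in mm. A simple brute-force sketch."""
    d_ref = d_meas.max()                    # global normalization
    gammas = np.empty_like(d_meas)
    for i, (xm, dm) in enumerate(zip(x_meas, d_meas)):
        dist2 = ((x_calc - xm) / dta) ** 2          # distance-to-agreement term
        dose2 = ((d_calc - dm) / (dd * d_ref)) ** 2  # dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# toy profiles: calculation shifted 1 mm and scaled 1% relative to measurement
x = np.linspace(-50.0, 50.0, 201)
meas = np.exp(-(x / 30.0) ** 2)
calc = 1.01 * np.exp(-((x - 1.0) / 30.0) ** 2)
g = gamma_index_1d(x, meas, x, calc)
print("pass rate:", round(100 * np.mean(g <= 1.0), 1), "%")
```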

  19. Valid and reliable authentic assessment of culminating student performance in the biomedical sciences.

    PubMed

    Oh, Deborah M; Kim, Joshua M; Garcia, Raymond E; Krilowicz, Beverly L

    2005-06-01

    There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual courses and for degree-granting programs. We report here on a capstone course developed by two natural sciences departments, Biological Sciences and Chemistry/Biochemistry, which engages students in an important culminating experience, requiring synthesis of skills and knowledge developed throughout the program while providing the departments with important assessment information for use in program improvement. The student work products produced in the course, a written grant proposal, and an oral summary of the proposal, provide a rich source of data regarding student performance on an authentic assessment task. The validity and reliability of the instruments and the resulting student performance data were demonstrated by collaborative review by content experts and a variety of statistical measures of interrater reliability, including percentage agreement, intraclass correlations, and generalizability coefficients. The high interrater reliability reported when the assessment instruments were used for the first time by a group of external evaluators suggests that the assessment process and instruments reported here will be easily adopted by other natural science faculty.
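
    A minimal sketch of two of the interrater statistics named above (percentage agreement and a two-way random, single-rater ICC) follows; the rating matrix is invented for illustration.

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random, single-rater ICC(2,1) from an (n_subjects, k_raters)
    score matrix (Shrout & Fleiss convention)."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    m = Y.mean()
    msr = k * ((Y.mean(axis=1) - m) ** 2).sum() / (n - 1)   # subjects (rows)
    msc = n * ((Y.mean(axis=0) - m) ** 2).sum() / (k - 1)   # raters (columns)
    sse = ((Y - m) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def percent_agreement(Y, tol=0):
    """Mean share of subjects on which each pair of raters agrees within tol."""
    Y = np.asarray(Y)
    n, k = Y.shape
    pairs = [(a, b) for a in range(k) for b in range(a + 1, k)]
    agree = [np.mean(np.abs(Y[:, a] - Y[:, b]) <= tol) for a, b in pairs]
    return float(np.mean(agree))

# hypothetical scores: 6 proposals rated by 3 evaluators on a 1-5 scale
Y = [[4, 4, 5], [2, 3, 2], [5, 5, 5], [3, 3, 4], [1, 2, 1], [4, 4, 4]]
print(round(icc_2_1(Y), 3), round(percent_agreement(Y), 3))
```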

  20. Disturbance Reduction Control Design for the ST7 Flight Validation Experiment

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Hsu, O. C.; Markley, F. L.; Houghton, M. B.

    2003-01-01

    The Space Technology 7 experiment will perform an on-orbit system-level validation of two specific Disturbance Reduction System technologies: a gravitational reference sensor employing a free-floating test mass, and a set of micro-Newton colloidal thrusters. The ST7 Disturbance Reduction System is designed to maintain the spacecraft's position with respect to a free-floating test mass to less than 10 nm/Hz, over the frequency range of 1 to 30 mHz. This paper presents the design and analysis of the coupled, drag-free and attitude control systems that close the loop between the gravitational reference sensor and the micro-Newton thrusters, while incorporating star tracker data at low frequencies. A full 18 degree-of-freedom model, which incorporates rigid-body models of the spacecraft and two test masses, is used to evaluate the effects of actuation and measurement noise and disturbances on the performance of the drag-free system.

  1. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources, including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering indices ("Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive and evidence from neuropsychological testing, self-report, and clinical data. In study (1) a validity parameter is derived using reaction time data from a sample composed of inpatients with recent severe brain lesions who were not involved in litigation, and of litigants with and without brain lesions. In study (2) the validity parameter is tested in an independent sample of litigants. In study (3) the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2 sensitivity=0.94, specificity=1.00; study 3 sensitivity=0.79, specificity=0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
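
    As an illustration of the kind of derivation described above (a logistic model on median reaction time and its standard deviation, scored by sensitivity and specificity), the sketch below uses simulated groups; the numbers are not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# simulated derivation sample: median reaction time (ms) and its standard
# deviation per testee; 1 = classified as non-cooperative (Slick-type criteria)
rng = np.random.default_rng(2)
rt_med = np.concatenate([rng.normal(450, 60, 80), rng.normal(700, 120, 40)])
rt_sd  = np.concatenate([rng.normal(80, 20, 80),  rng.normal(220, 60, 40)])
y      = np.concatenate([np.zeros(80), np.ones(40)])

X = np.column_stack([rt_med, rt_sd])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for a stable fit
clf = LogisticRegression().fit(X, y)

# evaluated on the derivation data here; a real study uses independent samples
pred = clf.predict(X)
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", round(tp / (tp + fn), 2),
      "specificity:", round(tn / (tn + fp), 2))
```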

  2. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    Many benchmark experiments have been carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments were vaguely assumed to validate the nuclear data below 14 MeV, but no precise studies exist to date. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  3. A statistical approach to selecting and confirming validation targets in -omics experiments

    PubMed Central

    2012-01-01

    Background Genomic technologies are, by their very nature, designed for hypothesis generation. In some cases, the hypotheses that are generated require that genome scientists confirm findings about specific genes or proteins. But one major advantage of high-throughput technology is that global genetic, genomic, transcriptomic, and proteomic behaviors can be observed. Manual confirmation of every statistically significant genomic result is prohibitively expensive. This has led researchers in genomics to adopt the strategy of confirming only a handful of the most statistically significant results, a small subset chosen for biological interest, or a small random subset. But there is no standard approach for selecting and quantitatively evaluating validation targets. Results Here we present a new statistical method and approach for statistically validating lists of significant results based on confirming only a small random sample. We apply our statistical method to show that the usual practice of confirming only the most statistically significant results does not statistically validate result lists. We analyze an extensively validated RNA-sequencing experiment to show that confirming a random subset can statistically validate entire lists of significant results. Finally, we analyze multiple publicly available microarray experiments to show that statistically validating random samples can both (i) provide evidence to confirm long gene lists and (ii) save thousands of dollars and hundreds of hours of labor over manual validation of each significant result. Conclusions For high-throughput -omics studies, statistical validation is a cost-effective and statistically valid approach to confirming lists of significant results. PMID:22738145
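
    One simple way to quantify what a confirmed random subset says about the whole list is an exact binomial lower bound on the list-wide confirmation rate; the sketch below is an illustrative calculation, not the authors' exact procedure.

```python
from scipy.stats import beta

def lower_bound_true_positive_rate(n_sampled, n_confirmed, alpha=0.05):
    """One-sided Clopper-Pearson lower confidence bound on the proportion of
    a significant-result list that would confirm, based on a random subset."""
    if n_confirmed == 0:
        return 0.0
    return beta.ppf(alpha, n_confirmed, n_sampled - n_confirmed + 1)

# e.g. 18 of 20 randomly chosen hits confirm in follow-up experiments:
# with 95% confidence, at least this fraction of the full list would confirm
print(round(lower_bound_true_positive_rate(20, 18), 3))
```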

  4. Performance Validity Testing in Neuropsychology: Scientific Basis and Clinical Application-A Brief Review.

    PubMed

    Greher, Michael R; Wodushek, Thomas R

    2017-03-01

    Performance validity testing refers to neuropsychologists' methodology for determining whether neuropsychological test performances completed in the course of an evaluation are valid (ie, the results of true neurocognitive function) or invalid (ie, overly impacted by the patient's effort/engagement in testing). This determination relies upon the use of either standalone tests designed for this sole purpose, or specific scores/indicators embedded within traditional neuropsychological measures that have demonstrated this utility. In response to a greater appreciation for the critical role that performance validity issues play in neuropsychological testing and the need to measure this variable to the best of our ability, the scientific base for performance validity testing has expanded greatly over the last 20 to 30 years. As such, the majority of current day neuropsychologists in the United States use a variety of measures for the purpose of performance validity testing as part of everyday forensic and clinical practice and address this issue directly in their evaluations. The following is the first article of a 2-part series that will address the evolution of performance validity testing in the field of neuropsychology, both in terms of the science as well as the clinical application of this measurement technique. The second article of this series will review performance validity tests in terms of methods for development of these measures, and maximizing of diagnostic accuracy.

  5. The Development and Validation of a Concise Instrument for Formative Assessment of Team Leader Performance During Simulated Pediatric Resuscitations.

    PubMed

    Nadkarni, Lindsay D; Roskind, Cindy G; Auerbach, Marc A; Calhoun, Aaron W; Adler, Mark D; Kessler, David O

    2018-04-01

    The aim of this study was to assess the validity of a formative feedback instrument for leaders of simulated resuscitations. This is a prospective validation study with a fully crossed (person × scenario × rater) study design. The Concise Assessment of Leader Management (CALM) instrument was designed by pediatric emergency medicine and graduate medical education experts to be used off the shelf to evaluate and provide formative feedback to resuscitation leaders. Four experts reviewed 16 videos of in situ simulated pediatric resuscitations and scored resuscitation leader performance using the CALM instrument. The videos consisted of 4 pediatric emergency department resuscitation teams each performing in 4 pediatric resuscitation scenarios (cardiac arrest, respiratory arrest, seizure, and sepsis). We report on content and internal structure (reliability) validity of the CALM instrument. Content validity was supported by the instrument development process that involved professional experience, expert consensus, focused literature review, and pilot testing. Internal structure validity (reliability) was supported by the generalizability analysis. The main component that contributed to score variability was the person (33%), meaning that individual leaders performed differently. The rater component had almost zero (0%) contribution to variance, which implies that raters were in agreement and argues for high interrater reliability. These results provide initial evidence to support the validity of the CALM instrument as a reliable assessment instrument that can facilitate formative feedback to leaders of pediatric simulated resuscitations.

  6. Ride qualities criteria validation/pilot performance study: Flight test results

    NASA Technical Reports Server (NTRS)

    Nardi, L. U.; Kawana, H. Y.; Greek, D. C.

    1979-01-01

    Pilot performance during a terrain following flight was studied for ride quality criteria validation. Data from manual and automatic terrain following operations conducted during low level penetrations were analyzed to determine the effect of ride qualities on crew performance. The conditions analyzed included varying levels of turbulence, terrain roughness, and mission duration with a ride smoothing system on and off. Limited validation of the B-1 ride quality criteria and some of the first order interactions between ride qualities and pilot/vehicle performance are highlighted. An earlier B-1 flight simulation program correlated well with the flight test results.

  7. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C., E-mail: hansec@uw.edu; Columbia University, New York, New York 10027; Victor, B.

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
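    In practice, a biorthogonal decomposition of a space-time data matrix can be computed as a singular value decomposition; the sketch below extracts spatial mode structures from a probe-array signal matrix and correlates experimental against simulated modes. The array sizes, data, and the simple cosine metric are assumptions for illustration, not the HIT-SI/NIMROD/PSI-TET analysis code.

        # Sketch: biorthogonal decomposition (BD) via SVD plus a simple
        # mode-structure correlation metric.
        import numpy as np

        rng = np.random.default_rng(1)
        n_probes, n_times = 192, 2000                       # e.g., 192 surface magnetic probes
        B_exp = rng.standard_normal((n_probes, n_times))    # placeholder experimental data
        B_sim = rng.standard_normal((n_probes, n_times))    # placeholder simulation data

        # Columns of U are spatial modes ("topos"); rows of Vt are temporal modes ("chronos").
        U_exp, s_exp, _ = np.linalg.svd(B_exp, full_matrices=False)
        U_sim, s_sim, _ = np.linalg.svd(B_sim, full_matrices=False)

        # Correlation between leading experimental and simulated spatial structures.
        for i in range(3):
            corr = abs(np.dot(U_exp[:, i], U_sim[:, i]))    # unit vectors, so the dot product is a cosine
            print(f"mode {i}: |correlation| = {corr:.2f}")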

  8. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli

    This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram) modeling, and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and, hence, allows for prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.
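    For context, CALPHAD assessments of the kind described here typically represent a substitutional solution phase with a Redlich-Kister polynomial for the excess Gibbs energy; this is the standard textbook form, not the project's specific Ni-Al-Cr-Co-Si-Hf-Y description:

        G_m = \sum_i x_i \, {}^{\circ}G_i + RT \sum_i x_i \ln x_i
              + \sum_i \sum_{j>i} x_i x_j \sum_k {}^{k}L_{ij} (x_i - x_j)^k

    where x_i are mole fractions, {}^{\circ}G_i are the pure-element reference energies, and the interaction parameters {}^{k}L_{ij} are the quantities fitted to first-principles and experimental data in tools such as ESPEI/pycalphad.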

  9. A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.

    PubMed

    Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H

    2016-06-01

    Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a newly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.

  10. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.

  11. Pathways to Engineering: The Validation Experiences of Transfer Students

    ERIC Educational Resources Information Center

    Zhang, Yi; Ozuna, Taryn

    2015-01-01

    Community college engineering transfer students are a critical student population of engineering degree recipients and technical workforce in the United States. Focusing on this group of students, we adopted Rendón's (1994) validation theory to explore the students' experiences in community colleges prior to transferring to a four-year…

  12. Construction and Initial Validation of the Multiracial Experiences Measure (MEM)

    PubMed Central

    Yoo, Hyung Chol; Jackson, Kelly; Guevarra, Rudy P.; Miller, Matthew J.; Harrington, Blair

    2015-01-01

    This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across two studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one’s social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. PMID:26460977

  13. Construction and initial validation of the Multiracial Experiences Measure (MEM).

    PubMed

    Yoo, Hyung Chol; Jackson, Kelly F; Guevarra, Rudy P; Miller, Matthew J; Harrington, Blair

    2016-03-01

    This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across 2 studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one's social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. (c) 2016 APA, all rights reserved).

  14. Psychological collectivism: a measurement validation and linkage to group member performance.

    PubMed

    Jackson, Christine L; Colquitt, Jason A; Wesson, Michael J; Zapata-Phelan, Cindy P

    2006-07-01

    The 3 studies presented here introduce a new measure of the individual-difference form of collectivism. Psychological collectivism is conceptualized as a multidimensional construct with the following 5 facets: preference for in-groups, reliance on in-groups, concern for in-groups, acceptance of in-group norms, and prioritization of in-group goals. Study 1 developed and tested the new measure in a sample of consultants. Study 2 cross-validated the measure using an alumni sample of a Southeastern university, assessing its convergent validity with other collectivism measures. Study 3 linked scores on the measure to 4 dimensions of group member performance (task performance, citizenship behavior, counterproductive behavior, and withdrawal behavior) in a computer software firm and assessed discriminant validity using the Big Five. The results of the studies support the construct validity of the measure and illustrate the potential value of collectivism as a predictor of group member performance. ((c) 2006 APA, all rights reserved).

  15. Standards Performance Continuum: Development and Validation of a Measure of Effective Pedagogy.

    ERIC Educational Resources Information Center

    Doherty, R. William; Hilberg, R. Soleste; Epaloose, Georgia; Tharp, Roland G.

    2002-01-01

    Describes the development and validation of the Standards Performance Continuum (SPC) for assessing teacher performance of the Standards for Effective Pedagogy. Three studies involving Florida, California, and New Mexico public school teachers provided evidence of inter-rater reliability, concurrent validity, and criterion-related validity…

  16. Father for the first time - development and validation of a questionnaire to assess fathers’ experiences of first childbirth (FTFQ)

    PubMed Central

    2012-01-01

    Background A father’s experience of the birth of his first child is important not only for his birth-giving partner but also for the father himself, his relationship with the mother and the newborn. No validated questionnaire assessing first-time fathers' experiences during childbirth is currently available. Hence, the aim of this study was to develop and validate an instrument to assess first-time fathers’ experiences of childbirth. Method Domains and items were initially derived from interviews with first-time fathers, and supplemented by a literature search and a focus group interview with midwives. The comprehensibility, comprehension and relevance of the items were evaluated by four paternity research experts and a preliminary questionnaire was pilot tested in eight first-time fathers. A revised questionnaire was completed by 200 first-time fathers (response rate = 81%). Exploratory factor analysis using principal component analysis with varimax rotation was performed and multitrait scaling analysis was used to test scaling assumptions. External validity was assessed by means of known-groups analysis. Results Factor analysis yielded four factors comprising 22 items and accounting for 48% of the variance. The domains found were Worry, Information, Emotional support and Acceptance. Multitrait analysis confirmed the convergent and discriminant validity of the domains; however, Cronbach’s alpha did not meet conventional reliability standards in two domains. The questionnaire was sensitive to differences between groups of fathers hypothesized to differ on important sociodemographic or clinical variables. Conclusions The questionnaire adequately measures important dimensions of first-time fathers’ childbirth experience and may be used to assess aspects of fathers’ experiences during childbirth. To obtain the FTFQ and permission for its use, please contact the corresponding author. PMID:22594834

  17. Father for the first time--development and validation of a questionnaire to assess fathers' experiences of first childbirth (FTFQ).

    PubMed

    Premberg, Åsa; Taft, Charles; Hellström, Anna-Lena; Berg, Marie

    2012-05-17

    A father's experience of the birth of his first child is important not only for his birth-giving partner but also for the father himself, his relationship with the mother and the newborn. No validated questionnaire assessing first-time fathers' experiences during childbirth is currently available. Hence, the aim of this study was to develop and validate an instrument to assess first-time fathers' experiences of childbirth. Domains and items were initially derived from interviews with first-time fathers, and supplemented by a literature search and a focus group interview with midwives. The comprehensibility, comprehension and relevance of the items were evaluated by four paternity research experts and a preliminary questionnaire was pilot tested in eight first-time fathers. A revised questionnaire was completed by 200 first-time fathers (response rate = 81%). Exploratory factor analysis using principal component analysis with varimax rotation was performed and multitrait scaling analysis was used to test scaling assumptions. External validity was assessed by means of known-groups analysis. Factor analysis yielded four factors comprising 22 items and accounting for 48% of the variance. The domains found were Worry, Information, Emotional support and Acceptance. Multitrait analysis confirmed the convergent and discriminant validity of the domains; however, Cronbach's alpha did not meet conventional reliability standards in two domains. The questionnaire was sensitive to differences between groups of fathers hypothesized to differ on important sociodemographic or clinical variables. The questionnaire adequately measures important dimensions of first-time fathers' childbirth experience and may be used to assess aspects of fathers' experiences during childbirth. To obtain the FTFQ and permission for its use, please contact the corresponding author.
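    A minimal sketch of the type of analysis reported in these two records (factor extraction with varimax rotation followed by a Cronbach's alpha check per domain) is given below. It uses scikit-learn's FactorAnalysis as a stand-in for the principal component analysis the authors used, and the item responses and domain assignment are entirely hypothetical, not FTFQ data.

        # Sketch: 4-factor extraction with varimax rotation and a Cronbach's
        # alpha check on one hypothetical domain.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(2)
        responses = rng.integers(1, 5, size=(200, 22)).astype(float)   # 200 fathers x 22 items

        fa = FactorAnalysis(n_components=4, rotation="varimax").fit(responses)
        loadings = fa.components_.T                                    # items x factors
        print("factor with the largest loading per item:", np.abs(loadings).argmax(axis=1))

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) scores for one domain."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        print("alpha, hypothetical domain (items 0-5):", round(cronbach_alpha(responses[:, :6]), 2))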

  18. Reliability and Validity of the Turkish Version of the Job Performance Scale Instrument.

    PubMed

    Harmanci Seren, Arzu Kader; Tuna, Rujnan; Eskin Bacaksiz, Feride

    2018-02-01

    Objective measurement of the job performance of nursing staff using valid and reliable instruments is important in the evaluation of healthcare quality. A current, valid, and reliable instrument that specifically measures the performance of nurses is required for this purpose. The aim of this study was to determine the validity and reliability of the Turkish version of the Job Performance Instrument. This study used a methodological design and a sample of 240 nurses working at different units in four hospitals in Istanbul, Turkey. A descriptive data form, the Job Performance Scale, and the Employee Performance Scale were used to collect data. Data were analyzed using IBM SPSS Statistics Version 21.0 and LISREL Version 8.51. On the basis of the data analysis, the instrument was revised. Some items were deleted, and subscales were combined. The Turkish version of the Job Performance Instrument was determined to be valid and reliable to measure the performance of nurses. The instrument is suitable for evaluating current nursing roles.

  19. A Validation Study of the Adolescent Dissociative Experiences Scale

    ERIC Educational Resources Information Center

    Keck Seeley, Susan M.; Perosa, Sandra L.; Perosa, Linda M.

    2004-01-01

    Objective: The purpose of this study was to further the validation process of the Adolescent Dissociative Experiences Scale (A-DES). In this study, a 6-point Likert response format with descriptors was used when responding to the A-DES rather than the 11-point response format used in the original A-DES. Method: The internal reliability and construct…

  20. Development and Validation of High Performance Liquid Chromatography Method for Determination Atorvastatin in Tablet

    NASA Astrophysics Data System (ADS)

    Yugatama, A.; Rohmani, S.; Dewangga, A.

    2018-03-01

    Atorvastatin is the primary choice for dyslipidemia treatment. Because the atorvastatin patent has expired, the pharmaceutical industry produces copies of the drug, so methods for testing tablet quality, including determination of the atorvastatin content of tablets, need to be developed. The purpose of this research was to develop and validate a simple HPLC analytical method for atorvastatin tablets. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 x 4.6 mm, 5 µm) as the reversed-phase stationary phase, a mixture of methanol-water at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at a wavelength of 245 nm. The validation parameters included selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method performed well on all of these parameters for the analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, respectively, and the linearity range was 20-120 ng/mL.
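    As a worked illustration of how LOD and LOQ are commonly obtained from a calibration curve (the standard ICH Q2 formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S, applied here to made-up calibration data rather than the study's measurements):

        # Sketch: LOD/LOQ from a linear calibration curve, where S is the slope
        # and sigma the residual standard deviation of the fit.
        import numpy as np

        conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)       # ng/mL (illustrative)
        area = np.array([151, 298, 455, 601, 752, 902], dtype=float)   # detector response (illustrative)

        slope, intercept = np.polyfit(conc, area, 1)
        residuals = area - (slope * conc + intercept)
        sigma = residuals.std(ddof=2)        # ddof=2: two fitted parameters

        lod = 3.3 * sigma / slope
        loq = 10.0 * sigma / slope
        print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")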

  1. Exploring a Framework for Consequential Validity for Performance-Based Assessments

    ERIC Educational Resources Information Center

    Kim, Su Jung

    2017-01-01

    This study explores a new comprehensive framework for understanding elements of validity, specifically for performance assessments that are administered within specific and dynamic contexts. The adoption of edTPA is a good empirical case for examining the concept of consequential validity because this assessment has been implemented at the state…

  2. The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).

    PubMed

    Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S

    2016-12-01

    The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.

  3. The Play Experience Scale: development and validation of a measure of play.

    PubMed

    Pavlas, Davin; Jentsch, Florian; Salas, Eduardo; Fiore, Stephen M; Sims, Valerie

    2012-04-01

    A measure of play experience in video games was developed through literature review and two empirical validation studies. Despite the considerable attention given to games in the behavioral sciences, play experience remains empirically underexamined. One reason for this gap is the absence of a scale that measures play experience. In Study 1, the initial Play Experience Scale (PES) was tested through an online validation that featured three different games (N = 203). In Study 2, a revised PES was assessed with a serious game in the laboratory (N = 77). Through principal component analysis of the Study 1 data, the initial 20-item PES was revised, resulting in the 16-item PES-16. Study 2 showed the PES-16 to be a robust instrument with the same patterns of correlations as in Study 1 via (a) internal consistency estimates, (b) correlations with established scales of motivation, (c) distributions of PES-16 scores in different game conditions, and (d) examination of the average variance extracted of the PES and the Intrinsic Motivation Scale. We suggest that the PES is appropriate for use in further validation studies. Additional examinations of the scale are required to determine its applicability to other contexts and its relationship with other constructs. The PES is potentially relevant to human factors undertakings involving video games, including basic research into play, games, and learning; prototype testing; and exploratory learning studies.

  4. Validity Evidence for a Serious Game to Assess Performance on Critical Pediatric Emergency Medicine Scenarios.

    PubMed

    Gerard, James M; Scalzo, Anthony J; Borgman, Matthew A; Watson, Christopher M; Byrnes, Chelsie E; Chang, Todd P; Auerbach, Marc; Kessler, David O; Feldman, Brian L; Payne, Brian S; Nibras, Sohail; Chokshi, Riti K; Lopreiato, Joseph O

    2018-06-01

    We developed a first-person serious game, PediatricSim, to teach and assess performances on seven critical pediatric scenarios (anaphylaxis, bronchiolitis, diabetic ketoacidosis, respiratory failure, seizure, septic shock, and supraventricular tachycardia). In the game, players are placed in the role of a code leader and direct patient management by selecting from various assessment and treatment options. The objective of this study was to obtain supportive validity evidence for the PediatricSim game scores. Game content was developed by 11 subject matter experts and followed the American Heart Association's 2011 Pediatric Advanced Life Support Provider Manual and other authoritative references. Sixty subjects with three different levels of experience were enrolled to play the game. Before game play, subjects completed a 40-item written pretest of knowledge. Game scores were compared between subject groups using scoring rubrics developed for the scenarios. Validity evidence was established and interpreted according to Messick's framework. Content validity was supported by a game development process that involved expert experience, focused literature review, and pilot testing. Subjects rated the game favorably for engagement, realism, and educational value. Interrater agreement on game scoring was excellent (intraclass correlation coefficient = 0.91, 95% confidence interval = 0.89-0.9). Game scores were higher for attendings followed by residents then medical students (Pc < 0.01) with large effect sizes (1.6-4.4) for each comparison. There was a very strong, positive correlation between game and written test scores (r = 0.84, P < 0.01). These findings contribute validity evidence for PediatricSim game scores to assess knowledge of pediatric emergency medicine resuscitation.

  5. Structural and Convergent Validity of the Homework Performance Questionnaire

    ERIC Educational Resources Information Center

    Pendergast, Laura L.; Watkins, Marley W.; Canivez, Gary L.

    2014-01-01

    Homework is a requirement for most school-age children, but research on the benefits and drawbacks of homework is limited by lack of psychometrically sound measurement of homework performance. This study examined the structural and convergent validity of scores from the newly developed Homework Performance Questionnaire -- Teacher Scale (HPQ-T).…

  6. Physical performance tests after stroke: reliability and validity.

    PubMed

    Maeda, A; Yuasa, T; Nakamura, K; Higuchi, S; Motohashi, Y

    2000-01-01

    To evaluate the reliability and validity of the modified physical performance tests for stroke survivors who live in a community. The subjects included 40 stroke survivors and 40 apparently healthy independent elderly persons. The physical performance tests for the stroke survivors comprised two physical capacity evaluation tasks that represented physical abilities necessary to perform the main activities of daily living, e.g., standing-up ability (time needed to stand up from bed rest) and walking ability (time needed to walk 10 m). Regarding the reliability of tests, significant correlations were confirmed between test and retest of physical performance tests with both short and long intervals in individuals after stroke. Regarding the validity of tests, the authors studied the significant correlations between the maximum isometric strength of the quadriceps muscle and the time needed to walk 10 m, centimeters reached while sitting and reaching, and the time needed to stand up from bed rest. The authors confirmed that there were significant correlations between the instrumental activity of daily living and the time needed to stand up from bed rest, along with the time needed to walk 10 m for the stroke survivors. These physical performance tests are useful guides for evaluating the level of activity of daily living and physical frailty of stroke survivors living in a community.

  7. EAQUATE: An International Experiment for Hyper-Spectral Atmospheric Sounding Validation

    NASA Technical Reports Server (NTRS)

    Taylor, J. P.; Smith, W.; Cuomo, V.; Larar, A.; Zhou, D.; Serio, C.; Maestri, T.; Rizzi, R.; Newman, S.; Antonelli, P.

    2008-01-01

    The international experiment called EAQUATE (European AQUA Thermodynamic Experiment) was held in September 2004 in Italy and the United Kingdom to demonstrate certain ground-based and airborne systems useful for validating hyperspectral satellite sounding observations. A range of flights over land and marine surfaces were conducted to coincide with overpasses of the AIRS instrument on the EOS Aqua platform. Direct radiance evaluation of AIRS using NAST-I and SHIS has shown excellent agreement. Comparisons of level 2 retrievals of temperature and water vapor from AIRS and NAST-I validated against high quality lidar and drop sonde data show that the 1K/1km and 10%/1km requirements for temperature and water vapor (respectively) are generally being met. The EAQUATE campaign has proven the need for synergistic measurements from a range of observing systems for satellite cal/val and has paved the way for future cal/val activities in support of IASI on the European Metop platform and CrIS on the US NPP/NPOESS platform.

  8. In-Space Structural Validation Plan for a Stretched-Lens Solar Array Flight Experiment

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woods-Vedeler, Jessica A.; Jones, Thomas W.

    2001-01-01

    This paper summarizes in-space structural validation plans for a proposed Space Shuttle-based flight experiment. The test article is an innovative, lightweight solar array concept that uses pop-up, refractive stretched-lens concentrators to achieve a power/mass density of at least 175 W/kg, which is more than three times greater than current capabilities. The flight experiment will validate this new technology to retire the risk associated with its first use in space. The experiment includes structural diagnostic instrumentation to measure the deployment dynamics, static shape, and modes of vibration of the 8-meter-long solar array and several of its lenses. These data will be obtained by photogrammetry using the Shuttle payload-bay video cameras and miniature video cameras on the array. Six accelerometers are also included in the experiment to measure base excitations and small-amplitude tip motions.

  9. Validating workplace performance assessments in health sciences students: a case study from speech pathology.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy

    2013-01-01

    Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory approaches. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.

  10. Explicating Experience: Development of a Valid Scale of Past Hazard Experience for Tornadoes.

    PubMed

    Demuth, Julie L

    2018-03-23

    People's past experiences with a hazard theoretically influence how they approach future risks. Yet, past hazard experience has been conceptualized and measured in wide-ranging, often simplistic, ways, resulting in mixed findings about its relationship with risk perception. This study develops a scale of past hazard experiences, in the context of tornadoes, that is content and construct valid. A conceptual definition was developed, a set of items were created to measure one's most memorable and multiple tornado experiences, and the measures were evaluated through two surveys of the public who reside in tornado-prone areas. Four dimensions emerged of people's most memorable experience, reflecting their awareness of the tornado risk that day, their personalization of the risk, the intrusive impacts on them personally, and impacts experienced vicariously through others. Two dimensions emerged of people's multiple experiences, reflecting common types of communication received and negative emotional responses. These six dimensions are novel in that they capture people's experience across the timeline of a hazard as well as intangible experiences that are both direct and indirect. The six tornado experience dimensions were correlated with tornado risk perceptions measured as cognitive-affective and as perceived probability of consequences. The varied experience-risk perception results suggest that it is important to understand the nuances of these concepts and their relationships. This study provides a foundation for future work to continue explicating past hazard experience, across different risk contexts, and for understanding its effect on risk assessment and responses. © 2018 Society for Risk Analysis.

  11. Validation of Air-Backed Underwater Explosion Experiments with ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, L D

    2005-02-04

    This paper summarizes an exercise carried out to validate the process of implementing LLNL's ALE3D to predict the permanent deformation and rupture of an air-backed steel plate subjected to underwater shock. Experiments were performed in a shock tank at the Naval Science and Technology Laboratory in Visakhapatnam, India, and the results are documented in reference. A consistent set of air-backed plates is subjected to shocks from increasing weights of explosives ranging from 10 g to 80 g. At 40 g and above, rupture is recorded in the experiment and, without fracture mechanics implemented in ALE3D, only the cases of 10 g, 20 g, and 30 g are presented here. This methodology applies the Jones-Wilkins-Lee (JWL) Equation of State (EOS) to predict the pressure of the expanding detonation products and the Gruneisen EOS for water under highly dynamic compressible flow, both on 1-point integrated 3-D continuum elements. The steel plates are modeled with a bilinear elastic-plastic response with failure and are simulated with 3-point integrated shell elements. Failure for this exercise is based on effective (or equivalent) plastic strain.
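    For reference, the JWL equation of state named above has the standard form (the constants are explosive-specific and are not reproduced here):

        P = A \left(1 - \frac{\omega}{R_1 V}\right) e^{-R_1 V} + B \left(1 - \frac{\omega}{R_2 V}\right) e^{-R_2 V} + \frac{\omega E}{V}

    where V is the relative volume of the detonation products, E the internal energy per unit initial volume, and A, B, R_1, R_2, and \omega are fitted constants.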

  12. The Development and Validation of a Rubric to Enhance Performer Feedback for Undergraduate Vocal Solo Performance

    ERIC Educational Resources Information Center

    Herrell, Katherine A.

    2014-01-01

    This is a study of the development and validation of a rubric to enhance performer feedback for undergraduate vocal solo performance. In the literature, assessment of vocal performance is under-represented, and the value of feedback from the assessment of musical performances, from the point of view of the performer, is nonexistent. The research…

  13. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2017-07-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated with Condition 4 of the DKEFS Color-Word Interference Test (ρ = -.425) and with the Wechsler Test of Adult Reading (ρ = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  14. SMAP Validation Experiment 2015 (SMAPVEX15)

    NASA Astrophysics Data System (ADS)

    Colliander, A.; Jackson, T. J.; Cosh, M. H.; Misra, S.; Crow, W. T.; Chae, C. S.; Moghaddam, M.; O'Neill, P. E.; Entekhabi, D.; Yueh, S. H.

    2015-12-01

    NASA's (National Aeronautics and Space Administration) Soil Moisture Active Passive (SMAP) mission was launched in January 2015. The objective of the mission is global mapping of soil moisture and freeze/thaw state. For soil moisture algorithm validation, the SMAP project and NASA coordinated SMAPVEX15 around the Walnut Gulch Experimental Watershed (WGEW) in Tombstone, Arizona on August 1-19, 2015. The main goals of SMAPVEX15 are to understand the effects and contribution of heterogeneity on the soil moisture retrievals, evaluate the impact of known RFI sources on retrieval, and analyze the brightness temperature product calibration and heterogeneity effects. Additionally, the campaign aims to contribute to the validation of GPM (Global Precipitation Mission) data products. The campaign will feature three airborne microwave instruments: PALS (Passive Active L-band System), UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) and AirMOSS (Airborne Microwave Observatory of Subcanopy and Subsurface). PALS has L-band radiometer and radar, and UAVSAR and AirMOSS have L- and P-band synthetic aperture radars, respectively. The PALS instrument will map the area on seven days coincident with SMAP overpasses; UAVSAR and AirMOSS on four days. WGEW was selected as the experiment site due to the rainfall patterns in August and existing dense networks of precipitation gages and soil moisture sensors. An additional temporary network of approximately 80 soil moisture stations was deployed in the region. Rainfall observations were supplemented with two X-band mobile scanning radars, approximately 25 tipping bucket rain gauges, three laser disdrometers, and three vertically-profiling K-band radars. Teams were on the field to take soil moisture samples for gravimetric soil moisture, bulk density and rock fraction determination as well as to measure surface roughness and vegetation water content. In this talk we will present preliminary results from the experiment including

  15. Validation of multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Siewiorek, D. P.; Segall, Z.; Kong, T.

    1982-01-01

    Experiments that can be used to validate fault free performance of multiprocessor systems in aerospace systems integrating flight controls and avionics are discussed. Engineering prototypes for two fault tolerant multiprocessors are tested.

  16. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation.

    PubMed

    Boerebach, Benjamin C M; Arah, Onyebuchi A; Busch, Olivier R C; Lombarts, Kiki M J M H

    2012-01-01

    In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluation correlated with the residents' evaluation of those surgeons. We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggest the need for using external feedback sources in informed self evaluation of surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  17. Validating YouTube Factors Affecting Learning Performance

    NASA Astrophysics Data System (ADS)

    Pratama, Yoga; Hartanto, Rudy; Suning Kusumawardani, Sri

    2018-03-01

    YouTube is often used as a companion medium or a learning supplement. One educational institution that often uses it is Jogja Audio School (JAS), which focuses on music production education. Music production is difficult material to learn, especially audio mastering. With tutorial content from YouTube, students find it easier to learn and understand audio mastering, and their learning performance improves. This study aims to validate the role of YouTube as a learning medium in improving students' learning performance by examining the factors that affect it. The sample involved 100 respondents from JAS at the audio mastering level. The results showed that student learning performance increases, as seen from the factors that have a significant influence: motivation, instructional content, and YouTube usefulness. Overall, the findings suggest that YouTube plays an important role in student learning performance in music production education and serves as an innovative and efficient learning medium.

  18. Issues in developing valid assessments of speech pathology students' performance in the workplace.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Alison; McAllister, Lindy

    2010-01-01

    Workplace-based learning is a critical component of professional preparation in speech pathology. A validated assessment of this learning is seen to be 'the gold standard', but it is difficult to develop because of design and validation issues. These issues include the role and nature of judgement in assessment, challenges in measuring quality, and the relationship between assessment and learning. Valid assessment of workplace-based performance needs to capture the development of competence over time and account for both occupation-specific and generic competencies. This paper reviews important conceptual issues in the design of valid and reliable workplace-based assessments of competence including assessment content, process, impact on learning, measurement issues, and validation strategies. It then goes on to share what has been learned about quality assessment and validation of a workplace-based performance assessment using competency-based ratings. The outcomes of a four-year national development and validation of an assessment tool are described. A literature review of issues in conceptualizing, designing, and validating workplace-based assessments was conducted. Key factors to consider in the design of a new tool were identified and built into the cycle of design, trialling, and data analysis in the validation stages of the development process. This paper provides an accessible overview of factors to consider in the design and validation of workplace-based assessment tools. It presents strategies used in the development and national validation of a tool, COMPASS, used in every speech pathology programme in Australia, New Zealand, and Singapore. The paper also describes Rasch analysis, a model-based statistical approach which is useful for establishing validity and reliability of assessment tools. Through careful attention to conceptual and design issues in the development and trialling of workplace-based assessments, it has been possible to develop the
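    For readers unfamiliar with the approach, the simplest (dichotomous) form of the Rasch model referred to here expresses the probability that a person of ability \theta succeeds on an item of difficulty b as

        P(X = 1 \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}

    so that person ability and item difficulty are placed on a common logit scale; polytomous extensions of the same idea are what allow ordered competency ratings to be converted into interval-level measures.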

  19. Validity of three clinical performance assessments of internal medicine clerks.

    PubMed

    Hull, A L; Hodder, S; Berger, B; Ginsberg, D; Lindheim, N; Quan, J; Kleinhenz, M E

    1995-06-01

    To analyze the construct validity of three methods to assess the clinical performances of internal medicine clerks. A multitrait-multimethod (MTMM) study was conducted at the Case Western Reserve University School of Medicine to determine the convergent and divergent validity of a clinical evaluation form (CEF) completed by faculty and residents, an objective structured clinical examination (OSCE), and the medicine subject test of the National Board of Medical Examiners. Three traits were involved in the analysis: clinical skills, knowledge, and personal characteristics. A correlation matrix was computed for 410 third-year students who completed the clerkship between August 1988 and July 1991. There was a significant (p < .01) convergence of the four correlations that assessed the same traits by using different methods. However, the four convergent correlations were of moderate magnitude (ranging from .29 to .47). Divergent validity was assessed by comparing the magnitudes of the convergence correlations with the magnitudes of correlations among unrelated assessments (i.e., different traits by different methods). Seven of nine possible coefficients were smaller than the convergent coefficients, suggesting evidence of divergent validity. A significant CEF method effect was identified. There was convergent validity and some evidence of divergent validity with a significant method effect. The findings were similar for correlations corrected for attenuation. Four conclusions were reached: (1) the reliability of the OSCE must be improved, (2) the CEF ratings must be redesigned to further discriminate among the specific traits assessed, (3) additional methods to assess personal characteristics must be instituted, and (4) several assessment methods should be used to evaluate individual student performances.

  20. Use of the color trails test as an embedded measure of performance validity.

    PubMed

    Henry, George K; Algina, James

    2013-01-01

    One hundred personal injury litigants and disability claimants referred for a forensic neuropsychological evaluation were administered both portions of the Color Trails Test (CTT) as part of a more comprehensive battery of standardized tests. Subjects who failed two or more free-standing tests of cognitive performance validity formed the Failed Performance Validity (FPV) group, while subjects who passed all free-standing performance validity measures were assigned to the Passed Performance Validity (PPV) group. A cutscore of ≥45 seconds to complete Color Trails 1 (CT1) was associated with a classification accuracy of 78%, good sensitivity (66%) and high specificity (90%), while a cutscore of ≥84 seconds to complete Color Trails 2 (CT2) was associated with a classification accuracy of 82%, good sensitivity (74%) and high specificity (90%). A CT1 cutscore of ≥58 seconds, and a CT2 cutscore ≥100 seconds was associated with 100% positive predictive power at base rates from 20 to 50%.
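    The dependence of predictive power on base rate noted at the end of the abstract follows directly from Bayes' theorem; the short sketch below applies the CT2 sensitivity and specificity reported above (0.74 and 0.90) across several base rates, purely as an arithmetic illustration.

        # Sketch: positive predictive value (PPV) as a function of base rate,
        # using the CT2 sensitivity/specificity from the abstract.
        sens, spec = 0.74, 0.90

        for base_rate in (0.20, 0.30, 0.40, 0.50):
            ppv = (sens * base_rate) / (sens * base_rate + (1 - spec) * (1 - base_rate))
            print(f"base rate {base_rate:.0%}: PPV = {ppv:.2f}")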

  1. Test validity and performance validity: considerations in providing a framework for development of an ability-focused neuropsychological test battery.

    PubMed

    Larrabee, Glenn J

    2014-11-01

    Literature on test validity and performance validity is reviewed to propose a framework for specification of an ability-focused battery (AFB). Factor analysis supports six domains of ability: (i) verbal symbolic; (ii) visuoperceptual and visuospatial judgment and problem solving; (iii) sensorimotor skills; (iv) attention/working memory; (v) processing speed; and (vi) learning and memory (which can be divided into verbal and visual subdomains). The AFB should include at least three measures for each of the six domains, selected based on various criteria for validity including sensitivity to presence of disorder, sensitivity to severity of disorder, correlation with important activities of daily living, and containing embedded/derived measures of performance validity. Criterion groups should include moderate and severe traumatic brain injury, and Alzheimer's disease. Validation groups should also include patients with left and right hemisphere stroke, to determine measures sensitive to lateralized cognitive impairment and so that the moderating effects of auditory comprehension impairment and neglect can be analyzed on AFB measures. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Development and Initial Validation of the Performance Perfectionism Scale for Sport (PPS-S)

    ERIC Educational Resources Information Center

    Hill, Andrew P.; Appleton, Paul R.; Mallinson, Sarah H.

    2016-01-01

    Valid and reliable instruments are required to appropriately study perfectionism. With this in mind, three studies are presented that describe the development and initial validation of a new instrument designed to measure multidimensional performance perfectionism for use in sport (Performance Perfectionism Scale--Sport [PPS-S]). The instrument is…

  3. Verification and Validation of the BISON Fuel Performance Code for PCMI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Kyle Allan Lawrence; Novascone, Stephen Rhead; Gardner, Russell James

    2016-06-01

    BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. A brief overview of BISON’s computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described. Validation for application to light water reactor (LWR) PCMI problems is assessed by comparing predicted and measured rod diameter following base irradiation and power ramps. Results indicate a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. Initial rod diameter comparisons have led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.

  4. Supersonic Coaxial Jet Experiment for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Carty, A. A.; Doerner, S. E.; Diskin, G. S.; Drummond, J. P.

    1999-01-01

    A supersonic coaxial jet facility has been designed to provide experimental data suitable for the validation of CFD codes used to analyze high-speed propulsion flows. The center jet is of a light gas and the coflow jet is of air, and the mixing layer between them is compressible. Various methods have been employed in characterizing the jet flow field, including schlieren visualization, pitot, total temperature and gas sampling probe surveying, and RELIEF velocimetry. A Navier-Stokes code has been used to calculate the nozzle flow field and the results compared to the experiment.

  5. A reliability and validity study of the Palliative Performance Scale

    PubMed Central

    Ho, Francis; Lau, Francis; Downing, Michael G; Lesperance, Mary

    2008-01-01

    Background The Palliative Performance Scale (PPS) was first introduced in 1996 as a new tool for measurement of performance status in palliative care. PPS has been used in many countries and has been translated into other languages. Methods This study evaluated the reliability and validity of PPS. A web-based, case scenarios study with a test-retest format was used to determine reliability. Fifty-three participants were recruited and randomly divided into two groups, each evaluating 11 cases at two time points. The validity study was based on the content validation of 15 palliative care experts conducted over telephone interviews, with discussion on five themes: PPS as clinical assessment tool, the usefulness of PPS, PPS scores affecting decision making, the problems in using PPS, and the adequacy of PPS instruction. Results The intraclass correlation coefficients for absolute agreement were 0.959 and 0.964 for Group 1, at Time-1 and Time-2; 0.951 and 0.931 for Group 2, at Time-1 and Time-2 respectively. Results showed that the participants were consistent in their scoring over the two times, with a mean Cohen's kappa of 0.67 for Group 1 and 0.71 for Group 2. In the validity study, all experts agreed that PPS is a valuable clinical assessment tool in palliative care. Many of them have already incorporated PPS as part of their practice standard. Conclusion The results of the reliability study demonstrated that PPS is a reliable tool. The validity study found that most experts did not feel a need to further modify PPS, and only two experts requested that some performance status measures be defined more clearly. Areas of PPS use include prognostication, disease monitoring, care planning, hospital resource allocation, clinical teaching and research. PPS is also a good communication tool between palliative care workers. PMID:18680590
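
    For readers unfamiliar with the agreement statistics reported above, the sketch below shows one common way to compute a two-way random-effects intraclass correlation for absolute agreement and Cohen's kappa from test-retest scores; the scores are hypothetical and the study's own computations may differ in detail.

        # Hypothetical test-retest PPS ratings: rows = cases, columns = Time-1 / Time-2
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        ratings = np.array([
            [70, 70],
            [40, 50],
            [30, 30],
            [90, 80],
            [60, 60],
        ], dtype=float)

        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)

        ss_rows = k * np.sum((row_means - grand) ** 2)
        ss_cols = n * np.sum((col_means - grand) ** 2)
        ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

        msr = ss_rows / (n - 1)               # between-cases mean square
        msc = ss_cols / (k - 1)               # between-occasions mean square
        mse = ss_err / ((n - 1) * (k - 1))    # residual mean square

        # ICC(2,1): two-way random effects, absolute agreement, single rating
        icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
        print(f"ICC(2,1), absolute agreement: {icc:.3f}")

        # Cohen's kappa, treating each 10% PPS level as a category
        kappa = cohen_kappa_score(ratings[:, 0].astype(int), ratings[:, 1].astype(int))
        print(f"Cohen's kappa (Time-1 vs Time-2): {kappa:.2f}")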

  6. An Overlooked Population in Community College: International Students' (In)Validation Experiences With Academic Advising

    ERIC Educational Resources Information Center

    Zhang, Yi

    2016-01-01

    Objective: Guided by validation theory, this study aims to better understand the role that academic advising plays in international community college students' adjustment. More specifically, this study investigated how academic advising validates or invalidates their academic and social experiences in a community college context. Method: This…

  7. Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.

    2015-03-01

    Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta’s various failure options that include aleatory uncertainty and scale effects are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.

  8. Development of self and peer performance assessment on iodometric titration experiment

    NASA Astrophysics Data System (ADS)

    Nahadi; Siswaningsih, W.; Kusumaningtyas, H.

    2018-05-01

    This study aims to describe the development of a reliable and valid assessment for measuring students’ performance on iodometric titration, and the effect of self and peer assessment on students’ performance. The self and peer instrument provides valuable feedback for improving student performance. The developed assessment contains a rubric and tasks for facilitating self and peer assessment. The participants were 24 second-grade students at a vocational high school in Bandung, divided into two groups: the first 12 students took part in the validity test of the developed assessment, while the remaining 12 participated in the reliability test. Content validity was evaluated by expert judgment, which indicated that the developed performance assessment instrument was valid for each task, with reliability classified as very good. Analysis of the implementation of self and peer assessment showed that the peer instrument supported the self assessment.

  9. Integration and Test Flight Validation Plans for the Pulsed Plasma Thruster Experiment on EO- 1

    NASA Technical Reports Server (NTRS)

    Zakrzwski, Charles; Benson, Scott; Sanneman, Paul; Hoskins, Andy; Bauer, Frank H. (Technical Monitor)

    2002-01-01

    The Pulsed Plasma Thruster (PPT) Experiment on the Earth Observing One (EO-1) spacecraft has been designed to demonstrate the capability of a new generation PPT to perform spacecraft attitude control. The PPT is a small, self-contained pulsed electromagnetic propulsion system capable of delivering high specific impulse (900-1200 s) and very small impulse bits (10-1000 uN-s) at low average power (less than 1 to 100 W). Teflon fuel is ablated and slightly ionized by means of a capacitive discharge. The discharge also generates electromagnetic fields that accelerate the plasma by means of the Lorentz Force. EO-1 has a single PPT that can produce thrust in either the positive or negative pitch direction. The flight validation has been designed to demonstrate the ability of the PPT to provide precision pointing accuracy, response, and stability, and to confirm benign plume and EMI effects. This paper will document the success of the flight validation.

  10. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend to investigate the characteristics
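
    To make the concordance measures discussed above concrete, the sketch below implements a bare-bones Harrell's C for right-censored data (pair definitions and tie handling vary across implementations, and the data are invented); the review's point is that this estimator drifts upward as censoring increases, which is why Uno's measure is preferred at moderate censoring.

        import numpy as np

        def harrell_c(time, event, risk):
            """Naive Harrell's concordance index for right-censored survival data.

            time:  observed follow-up times
            event: 1 if the event was observed, 0 if censored
            risk:  predicted risk scores (higher = higher predicted hazard)
            Only pairs in which the shorter observed time ends in an event are usable.
            """
            time, event, risk = map(np.asarray, (time, event, risk))
            concordant, usable = 0.0, 0
            for i in range(len(time)):
                if event[i] != 1:
                    continue  # a censored subject cannot anchor a usable pair
                for j in range(len(time)):
                    if time[j] > time[i]:       # subject j outlived subject i
                        usable += 1
                        if risk[i] > risk[j]:
                            concordant += 1.0   # higher risk failed earlier: concordant
                        elif risk[i] == risk[j]:
                            concordant += 0.5   # tied predictions count half
            return concordant / usable

        # Hypothetical external-validation data (times in months)
        t = [5, 8, 12, 20, 30]
        e = [1, 1, 0, 1, 0]
        r = [0.9, 0.7, 0.4, 0.5, 0.1]
        print(f"Harrell's C = {harrell_c(t, e, r):.3f}")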

  11. Development and Validation of the Basketball Offensive Game Performance Instrument

    ERIC Educational Resources Information Center

    Chen, Weiyun; Hendricks, Kristin; Zhu, Weimo

    2013-01-01

    The purpose of this study was to design and validate the Basketball Offensive Game Performance Instrument (BOGPI) that assesses an individual player's offensive game performance competency in basketball. Twelve physical education teacher education (PETE) students playing two 10-minute, 3 vs. 3 basketball games were videotaped at end of a…

  12. Display format, highlight validity, and highlight method: Their effects on search performance

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Mckay, Tim D.; Obrien, Kevin M.; Rudisill, Marianne

    1991-01-01

    Display format and highlight validity were shown to affect visual display search performance; however, these studies were conducted on small, artificial displays of alphanumeric stimuli. A study manipulating these variables was conducted using realistic, complex Space Shuttle information displays. A 2x2x3 within-subjects analysis of variance found that search times were faster for items in reformatted displays than for current displays. Responses to validly highlighted items were significantly faster than responses to non-highlighted or invalidly highlighted items. The significant format by highlight validity interaction showed that there was little difference in response time between current and reformatted displays when highlighting was valid; however, under the non-highlighted or invalid highlight conditions, search times were faster with reformatted displays. A separate within-subject analysis of variance of display format, highlight validity, and several highlight methods did not reveal a main effect of highlight method. In addition, observed display search times were compared to search times predicted by Tullis' Display Analysis Program. Benefits of highlighting and reformatting displays to enhance search and the necessity to consider highlight validity and format characteristics in tandem for predicting search performance are discussed.

  13. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by
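
    The ordering at issue in the Val-MI strategy (internal validation split first, then imputation of the training and test parts separately, with performance pooled over imputations) can be sketched roughly as below; sklearn's IterativeImputer with posterior sampling stands in here for a full multiple-imputation procedure, and the data, model, and number of imputations are all illustrative rather than those used in the paper.

        # Minimal sketch of the Val-MI ordering: split first, then impute the training
        # and test parts separately; sklearn's IterativeImputer with posterior sampling
        # is used as a stand-in for proper multiple imputation. Data are simulated.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n, p = 400, 5
        X = rng.normal(size=(n, p))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
        X[rng.random(size=X.shape) < 0.15] = np.nan          # introduce missing values

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

        m = 10   # number of imputations
        aucs = []
        for d in range(m):
            X_tr_i = IterativeImputer(sample_posterior=True, random_state=d).fit_transform(X_tr)
            X_te_i = IterativeImputer(sample_posterior=True, random_state=100 + d).fit_transform(X_te)
            model = LogisticRegression(max_iter=1000).fit(X_tr_i, y_tr)
            aucs.append(roc_auc_score(y_te, model.predict_proba(X_te_i)[:, 1]))

        print(f"Val-MI style AUC estimate, pooled over {m} imputations: {np.mean(aucs):.3f}")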

  14. Reliability and Validity of the Professional Counseling Performance Evaluation

    ERIC Educational Resources Information Center

    Shepherd, J. Brad; Britton, Paula J.; Kress, Victoria E.

    2008-01-01

    The definition and measurement of counsellor trainee competency is an issue that has received increased attention yet lacks quantitative study. This research evaluates item responses, scale reliability and intercorrelations, interrater agreement, and criterion-related validity of the Professional Performance Fitness Evaluation/Professional…

  15. Do emotional intelligence and previous caring experience influence student nurse performance? A comparative analysis.

    PubMed

    Stenhouse, Rosie; Snowden, Austyn; Young, Jenny; Carver, Fiona; Carver, Hannah; Brown, Norrie

    2016-08-01

    Reports of poor nursing care have focused attention on values-based selection of candidates onto nursing programmes. Values-based selection lacks clarity and valid measures. Previous caring experience might lead to better care. Emotional intelligence (EI) might be associated with performance, and is conceptualised and measurable. To examine the impact of 1) previous caring experience, 2) emotional intelligence, and 3) social connection scores on performance and retention in a cohort of first year nursing and midwifery students in Scotland. A longitudinal, quasi-experimental design. Adult and mental health nursing, and midwifery programmes in a Scottish University. Adult, mental health and midwifery students (n=598) completed the Trait Emotional Intelligence Questionnaire-short form and Schutte's Emotional Intelligence Scale on entry to their programmes at a Scottish University, alongside demographic and previous caring experience data. Social connection was calculated from a subset of questions identified within the TEIQue-SF in a prior factor and Rasch analysis. Student performance was calculated as the mean mark across the year. Withdrawal data were gathered. 598 students completed baseline measures. 315 students declared previous caring experience, 277 did not. An independent-samples t-test identified that those without previous caring experience scored higher on performance (57.33±11.38) than those with previous caring experience (54.87±11.19), a statistically significant difference of 2.47 (95% CI, 0.54 to 4.38), t(533)=2.52, p=.012. Emotional intelligence scores were not associated with performance. Social connection scores for those withdrawing (mean rank=249) and those remaining (mean rank=304.75) were statistically significantly different, U=15,300, z=-2.61, p<0.009. Previous caring experience led to worse performance in this cohort. Emotional intelligence was not a useful indicator of performance. Lower scores on the social connection factor were associated
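
    As a generic illustration of the two comparisons reported above (an independent-samples t-test on mean marks and a Mann-Whitney U test for the withdrawal contrast), the snippet below uses simulated data with roughly the reported group means; it is not the study's data or code.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        no_experience = rng.normal(57.3, 11.4, size=120)   # mean mark, hypothetical group
        experience = rng.normal(54.9, 11.2, size=120)      # mean mark, hypothetical group

        t, p = stats.ttest_ind(no_experience, experience)
        print(f"independent-samples t-test: t = {t:.2f}, p = {p:.3f}")

        # Analogue of the non-parametric withdrawal comparison (social connection scores)
        withdrew = rng.normal(48, 10, size=40)
        remained = rng.normal(52, 10, size=200)
        u, p_u = stats.mannwhitneyu(withdrew, remained)
        print(f"Mann-Whitney U: U = {u:.0f}, p = {p_u:.3f}")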

  16. Music experience influences laparoscopic skills performance.

    PubMed

    Boyd, Tanner; Jung, Inkyung; Van Sickle, Kent; Schwesinger, Wayne; Michalek, Joel; Bingener, Juliane

    2008-01-01

    Music education affects the mathematical and visuo-spatial skills of school-age children. Visuo-spatial abilities have a significant effect on laparoscopic suturing performance. We hypothesize that prior music experience influences the performance of laparoscopic suturing tasks. Thirty novices observed a laparoscopic suturing task video. Each performed 3 timed suturing task trials. Demographics were recorded. A repeated measures linear mixed model was used to examine the effects of prior music experience on suturing task time. Twelve women and 18 men completed the tasks. When adjusted for video game experience, participants who currently played an instrument performed significantly faster than those who did not (P<0.001). The model showed a significant sex by instrument interaction. Men who had never played an instrument or were currently playing an instrument performed better than women in the same group (P=0.002 and P<0.001). There was no sex difference in the performance of participants who had played an instrument in the past (P=0.29). This study attempted to investigate the effect of music experience on the laparoscopic suturing abilities of surgical novices. The visuo-spatial abilities used in laparoscopic suturing may be enhanced in those involved in playing an instrument.
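
    A repeated-measures linear mixed model of the general form described above (timed suturing trials nested within participants, with an instrument-by-sex interaction) could be fit as in the sketch below; the column names, balanced design, and simulated effects are assumptions for illustration only.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        rows, pid = [], 0
        for plays in ["never", "past", "current"]:
            for sex in ["F", "M"]:
                for _ in range(5):                          # balanced toy design
                    base = {"never": 240, "past": 220, "current": 200}[plays] + rng.normal(0, 20)
                    for trial in range(3):                  # three timed suturing trials
                        rows.append({"subject": pid, "instrument": plays, "sex": sex,
                                     "trial": trial,
                                     "time_s": base - 10 * trial + rng.normal(0, 15)})
                    pid += 1
        df = pd.DataFrame(rows)

        # Random intercept per participant; fixed effects for instrument, sex, their
        # interaction, and trial number (practice effect)
        model = smf.mixedlm("time_s ~ instrument * sex + trial", df, groups=df["subject"])
        print(model.fit().summary())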

  17. Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment

    NASA Technical Reports Server (NTRS)

    Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo

    2000-01-01

    The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on line-of-sight jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, to qualify the experiment for the Space Shuttle, and to verify the functionality of the system. Included in the discussion will be test parameters, test setups, problems encountered, and test results.

  18. Solar power plant performance evaluation: simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated using a developed model comprising a photovoltaic array, battery storage, controller, and converters. The model is implemented in the MATLAB/SIMULINK software package. A perturb and observe (P&O) algorithm is used to maximize the generated power through a maximum power point tracker (MPPT) implementation. The outputs of the developed model are validated and supported by a case study carried out on an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. The residual exceeded the healthy threshold of 1.7 kW due to heavy snow in Manchester during the preceding winter. More importantly, the developed performance evaluation technique could be adopted to detect other factors that may degrade the performance of the PV panels, such as shading and dirt. Repeatability and reliability of the developed system performance evaluation were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
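
    The abstract names a perturb and observe (P&O) routine for maximum power point tracking; a minimal sketch of one P&O update step is given below, with the step size, sign conventions, and toy power-voltage curve chosen for illustration rather than taken from the paper.

        def perturb_and_observe(v_meas, p_meas, state, step=0.5):
            """One perturb-and-observe MPPT update.

            v_meas, p_meas: latest measured PV voltage and power
            state: dict carrying the previous sample and perturbation direction
            Returns the next voltage reference for the converter controller.
            """
            dv = v_meas - state["v_prev"]
            dp = p_meas - state["p_prev"]
            if dp != 0:
                # Keep perturbing the same way if power rose, otherwise reverse.
                state["direction"] = +1 if (dp > 0) == (dv > 0) else -1
            state["v_prev"], state["p_prev"] = v_meas, p_meas
            return v_meas + state["direction"] * step

        # Track a toy PV curve with its maximum power point at V = 30 V
        state = {"v_prev": 0.0, "p_prev": 0.0, "direction": +1}
        v = 25.0
        for _ in range(40):
            p = max(0.0, 100.0 - (v - 30.0) ** 2)   # hypothetical power-voltage curve
            v = perturb_and_observe(v, p, state)
        print(f"Final voltage reference: {v:.1f} V (true maximum at 30 V)")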

  19. Validation of scintillometer measurements over a heterogeneous landscape: The LITFASS-2009 Experiment

    NASA Astrophysics Data System (ADS)

    Beyrich, F.; Bange, J.; Hartogensis, O.; Raasch, S.

    2009-09-01

    The turbulent exchanges of heat and water vapour are essential land surface-atmosphere interaction processes in the local, regional and global energy and water cycles. Scintillometry can be considered as the only technique presently available for the quasi-operational experimental determination of area-averaged turbulent fluxes needed to validate the fluxes simulated by regional atmospheric models or derived from satellite images at a horizontal scale of a few kilometres. While scintillometry has found increasing application in recent years, some fundamental issues related to its use still need further investigation. In particular, no studies are known so far that reproduce the path-averaged structure parameters measured by scintillometers using independent measurements or modelling techniques. The LITFASS-2009 field experiment has been performed in the area around the Meteorological Observatory Lindenberg / Richard-Aßmann-Observatory in Germany during summer 2009. It was designed to investigate the spatial (horizontal and vertical) and temporal variability of structure parameters (underlying the scintillometer principle) over moderately heterogeneous terrain. The experiment essentially relied on a coupling of eddy-covariance measurements, scintillometry and airborne measurements with an unmanned autonomous aircraft able to fly precisely along the scintillometer path. Data interpretation will be supported by numerical modelling using a large-eddy simulation (LES) model. The paper will describe the design of the experiment. First preliminary results from the measurements will be presented.

  20. Examining students' views about validity of experiments: From introductory to Ph.D. students

    NASA Astrophysics Data System (ADS)

    Hu, Dehui; Zwickl, Benjamin M.

    2018-06-01

    We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.

  1. Human performance measurement: Validation procedures applicable to advanced manned telescience systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1990-01-01

    As telescience systems become more and more complex, autonomous, and opaque to their operators it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues are addressed as they relate to total system validation. The assumption is made that human interaction with the automated system will be required well into the Space Station Freedom era. Candidate human performance measurement-validation techniques are discussed for selected ground-to-space-to-ground and space-to-space situations. Most of these measures may be used in conjunction with an information throughput model presented elsewhere (Haines, 1990). Teleoperations, teleanalysis, teleplanning, teledesign, and teledocumentation are considered, as are selected illustrative examples of space related telescience activities.

  2. Validity and reliability of the Short Physical Performance Battery (SPPB)

    PubMed Central

    Curcio, Carmen-Lucía; Alvarado, Beatriz; Zunzunegui, María Victoria; Guralnik, Jack

    2013-01-01

    Objectives: To assess the validity (convergent and construct) and reliability of the Short Physical Performance Battery (SPPB) among non-disabled adults between 65 to 74 years of age residing in the Andes Mountains of Colombia. Methods: Design Validation study; Participants: 150 subjects aged 65 to 74 years recruited from elderly associations (day-centers) in Manizales, Colombia. Measurements: The SPPB tests of balance, including time to walk 4 meters and time required to stand from a chair 5 times were administered to all participants. Reliability was analyzed with a 7-day interval between assessments and use of repeated ANOVA testing. Construct validity was assessed using factor analysis and by testing the relationship between SPPB and depressive symptoms, cognitive function, and self rated health (SRH), while the concurrent validity was measured through relationships with mobility limitations and disability in Activities of Daily Living (ADL). ANOVA tests were used to establish these associations. Results: Test-retest reliability of the SPPB was high: 0.87 (CI95%: 0.77-0.96). A one factor solution was found with three SPPB tests. SPPB was related to self-rated health, limitations in walking and climbing steps and to indicators of disability, as well as to cognitive function and depression. There was a graded decrease in the mean SPPB score with increasing disability and poor health. Conclusion: The Spanish version of SPPB is reliable and valid to assess physical performance among older adults from our region. Future studies should establish their clinical applications and explore usage in population studies. PMID:24892614

  3. Simulation verification techniques study: Simulation performance validation techniques document. [for the space shuttle system

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1975-01-01

    Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real time acquisition and formatting of data from an all up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.

  4. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study.

    PubMed

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using cohort design and unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary RDOR of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.
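
    The summary statistic in this meta-epidemiological analysis, the diagnostic odds ratio (DOR), is computed from a validation study's 2x2 table as in the brief sketch below; the counts are hypothetical.

        def diagnostic_odds_ratio(tp, fp, fn, tn):
            """DOR = (TP*TN) / (FP*FN), i.e. (sens/(1-sens)) / ((1-spec)/spec)."""
            return (tp * tn) / (fp * fn)

        # Hypothetical 2x2 table from a prediction-rule validation study
        tp, fp, fn, tn = 80, 30, 20, 170
        print(f"DOR = {diagnostic_odds_ratio(tp, fp, fn, tn):.1f}")   # 80*170/(30*20) = 22.7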

  5. The Experience of Cognitive Intrusion of Pain: scale development and validation

    PubMed Central

    Attridge, Nina; Crombez, Geert; Van Ryckeghem, Dimitri; Keogh, Edmund; Eccleston, Christopher

    2015-01-01

    Abstract Patients with chronic pain often report their cognition to be impaired by pain, and this observation has been supported by numerous studies measuring the effects of pain on cognitive task performance. Furthermore, cognitive intrusion by pain has been identified as one of 3 components of pain anxiety, alongside general distress and fear of pain. Although cognitive intrusion is a critical characteristic of pain, no specific measure designed to capture its effects exists. In 3 studies, we describe the initial development and validation of a new measure of pain interruption: the Experience of Cognitive Intrusion of Pain (ECIP) scale. In study 1, the ECIP scale was administered to a general population sample to assess its structure and construct validity. In study 2, the factor structure of the ECIP scale was confirmed in a large general population sample experiencing no pain, acute pain, or chronic pain. In study 3, we examined the predictive value of the ECIP scale in pain-related disability in fibromyalgia patients. The ECIP scale scores followed a normal distribution with good variance in a general population sample. The scale had high internal reliability and a clear 1-component structure. It differentiated between chronic pain and control groups, and it was a significant predictor of pain-related disability over and above pain intensity. Repairing attentional interruption from pain may become a novel target for pain management interventions, both pharmacologic and nonpharmacologic. PMID:26067388

  6. The inventory for déjà vu experiences assessment. Development, utility, reliability, and validity.

    PubMed

    Sno, H N; Schalken, H F; de Jonghe, F; Koeter, M W

    1994-01-01

    In this article the development, utility, reliability, and validity of the Inventory for Déjà vu Experiences Assessment (IDEA) are described. The IDEA is a 23-item self-administered questionnaire consisting of a general section of nine questions and a qualitative section of 14 questions. The latter questions comprise 48 topics. The questionnaire appeared to be a user-friendly instrument with satisfactory to good reliability and validity. The IDEA permits the study of quantitative and qualitative characteristics of déjà vu experiences.

  7. Validation of the U-238 inelastic scattering neutron cross section through the EXCALIBUR dedicated experiment

    NASA Astrophysics Data System (ADS)

    Leconte, Pierre; Bernard, David

    2017-09-01

    EXCALIBUR is an integral transmission experiment based on the fast neutron source produced by the bare highly enriched fast burst reactor CALIBAN, located in CEA/DAM Valduc (France). Two experimental campaigns have been performed, one using a sphere of 17 cm diameter and one using two cylinders of 17 cm diameter and 9 cm height, both made of metallic uranium-238. A set of 15 different dosimeters with specific threshold energies has been employed to provide information on the neutron flux attenuation as a function of incident energy. Measurement uncertainties are typically in the range of 0.5-3% (1σ). The analysis of these experiments is performed with the TRIPOLI4 continuous energy Monte Carlo code. A calculation benchmark with validated simplifications is defined in order to bring the statistical convergence to under 2%. Various 238U evaluations have been tested: JEFF-3.1.1, ENDF/B-VII.1 and the IB36 evaluation from IAEA. A sensitivity analysis is presented to identify the contribution of each reaction cross section to the integral transmission rate. This feedback may be of interest for the international effort on 238U, through the CIELO project.

  8. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study

    PubMed Central

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Background Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules’ performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2–4.3) larger than validation studies using cohort design and unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2–3.1) compared to complete, partial and unclear verification. The summary RDOR of validation studies with inadequate sample size was 1.9 (95% CI: 1.2–3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980

  9. Five-Kilometers Time Trial: Preliminary Validation of a Short Test for Cycling Performance Evaluation.

    PubMed

    Dantas, Jose Luiz; Pereira, Gleber; Nakamura, Fabio Yuzo

    2015-09-01

    The five-kilometer time trial (TT5km) has been used to assess aerobic endurance performance without further investigation of its validity. This study aimed to perform a preliminary validation of the TT5km to rank well-trained cyclists based on aerobic endurance fitness and to assess changes in aerobic endurance performance. After the incremental test, 20 cyclists (age = 31.3 ± 7.9 years; body mass index = 22.7 ± 1.5 kg/m²; maximal aerobic power = 360.5 ± 49.5 W) performed the TT5km twice, collecting performance measures (time to complete, absolute and relative power output, average speed) and physiological responses (heart rate and electromyography activity). The validation criteria were pacing strategy, absolute and relative reliability, validity, and sensitivity. The sensitivity index was obtained from the ratio between the smallest worthwhile change and the typical error. The TT5km showed high absolute (coefficient of variation < 3%) and relative (intraclass correlation coefficient > 0.95) reliability of the performance variables, whereas reliability of the physiological responses was low. The TT5km performance variables were highly correlated with the aerobic endurance indices obtained from the incremental test (r > 0.70) and showed adequate sensitivity indices (> 1). The TT5km is a valid test to rank the aerobic endurance fitness of well-trained cyclists and to differentiate changes in aerobic endurance performance. Coaches can detect performance changes through either absolute (±17.7 W) or relative power output (±0.3 W·kg⁻¹), the time to complete the test (±13.4 s), or the average speed (±1.0 km·h⁻¹). Furthermore, TT5km performance can also be used to rank athletes according to their aerobic endurance fitness.
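
    The sensitivity index reported above is the ratio of the smallest worthwhile change to the typical error; the sketch below derives both from test-retest data in the conventional way (typical error as the SD of difference scores divided by the square root of 2, and the smallest worthwhile change as 0.2 times the between-subject SD). The power outputs and the 0.2 convention are assumptions for illustration, since the paper's exact computation is not reproduced here.

        import numpy as np

        # Hypothetical TT5km average power outputs (W) for two trials per cyclist
        trial1 = np.array([315, 290, 350, 305, 330, 310], dtype=float)
        trial2 = np.array([318, 286, 355, 300, 333, 305], dtype=float)

        diff = trial2 - trial1
        typical_error = diff.std(ddof=1) / np.sqrt(2)          # within-subject typical error
        between_sd = np.concatenate([trial1, trial2]).std(ddof=1)
        swc = 0.2 * between_sd                                  # smallest worthwhile change
        sensitivity_index = swc / typical_error

        print(f"typical error = {typical_error:.1f} W, SWC = {swc:.1f} W, "
              f"sensitivity index = {sensitivity_index:.2f}")   # > 1 suggests adequate sensitivity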

  10. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    PubMed

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.

  11. Evaluation of Fission Product Critical Experiments and Associated Biases for Burnup Credit Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T; Reed, Davis Allan

    2010-01-01

    One of the challenges associated with implementation of burnup credit is the validation of criticality calculations used in the safety evaluation; in particular the availability and use of applicable critical experiment data. The purpose of the validation is to quantify the relationship between reality and calculated results. Validation and determination of bias and bias uncertainty require the identification of sets of critical experiments that are similar to the criticality safety models. A principal challenge for crediting fission products (FP) in a burnup credit safety evaluation is the limited availability of relevant FP critical experiments for bias and bias uncertainty determination. This paper provides an evaluation of the available critical experiments that include FPs, along with bounding, burnup-dependent estimates of FP biases generated by combining energy dependent sensitivity data for a typical burnup credit application with the nuclear data uncertainty information distributed with SCALE 6. A method for determining separate bias and bias uncertainty values for individual FPs and illustrative results is presented. Finally, a FP bias calculation method based on data adjustment techniques and reactivity sensitivity coefficients calculated with the SCALE sensitivity/uncertainty tools and some typical results is presented. Using the methods described in this paper, the cross-section bias for a representative high-capacity spent fuel cask associated with the ENDF/B-VII nuclear data for 16 most important stable or near stable FPs is predicted to be no greater than 2% of the total worth of the 16 FPs, or less than 0.13 % k/k.

  12. Science objectives and performance of a radiometer and window design for atmospheric entry experiments

    NASA Technical Reports Server (NTRS)

    Craig, Roger A.; Davy, William C.; Whiting, Ellis E.

    1994-01-01

    The Radiative Heating Experiment, RHE, aboard the Aeroassist Flight Experiment, AFE, (now cancelled) was to make in-situ measurements of the stagnation region shock layer radiation during an aerobraking maneuver from geosynchronous to low earth orbit. The measurements were to provide a data base to help develop and validate aerothermodynamic computational models. Although cancelled, much work was done to develop the science requirements and to successfully meet RHE technical challenges. This paper discusses the RHE scientific objectives and expected science performance of a small sapphire window for the RHE radiometers. The spectral range required was from 170 to 900 nm. The window size was based on radiometer sensitivity requirements including capability of on-orbit solar calibration.

  13. Functional performance testing of the hip in athletes: a systematic review for reliability and validity.

    PubMed

    Kivlan, Benjamin R; Martin, Robroy L

    2012-08-01

    The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. The single-leg stance, deep squat, single-leg squat, and star excursion balance tests (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement. Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. 2b (Systematic Review of Literature).

  14. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  15. Observing System Simulation Experiments

    NASA Technical Reports Server (NTRS)

    Prive, Nikki

    2015-01-01

    This presentation gives an overview of Observing System Simulation Experiments (OSSEs). The components of an OSSE are described, along with discussion of the process for validating, calibrating, and performing experiments.

  16. The Development and Validation of a Life Experience Inventory for the Identification of Creative Electrical Engineers.

    ERIC Educational Resources Information Center

    Michael, William B.; Colson, Kenneth R.

    1979-01-01

    The construction and validation of the Life Experience Inventory (LEI) for the identification of creative electrical engineers are described. Using the number of patents held or pending as a criterion measure, the LEI was found to have high concurrent validity. (JKS)

  17. Performance Validity Testing in Neuropsychology: Methods for Measurement Development and Maximizing Diagnostic Accuracy.

    PubMed

    Wodushek, Thomas R; Greher, Michael R

    2017-05-01

    In the first column in this 2-part series, Performance Validity Testing in Neuropsychology: Scientific Basis and Clinical Application-A Brief Review, the authors introduced performance validity tests (PVTs) and their function, provided a justification for why they are necessary, traced their ongoing endorsement by neuropsychological organizations, and described how they are used and interpreted by ever increasing numbers of clinical neuropsychologists. To enhance readers' understanding of these measures, this second column briefly describes common detection strategies used in PVTs as well as the typical methods used to validate new PVTs and determine cut scores for valid/invalid determinations. We provide a discussion of the latest research demonstrating how neuropsychologists can combine multiple PVTs in a single battery to improve sensitivity/specificity to invalid responding. Finally, we discuss future directions for the research and application of PVTs.

  18. Structured Uncertainty Bound Determination From Data for Control and Performance Validation

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    2003-01-01

    This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with a reasonable confidence, a near-optimal robust closed loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software, Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determine uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state-of-the-art in uncertainty bound determination and in turn facilitate benchmarking of robust control technology. To help clarify the methodology and use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of a flexible structure dynamics, and the second example involves a closed loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.

  19. The Validity and Reliability of a Performance Assessment Procedure in Ice Hockey

    ERIC Educational Resources Information Center

    Nadeau, Luc; Richard, Jean-Francois; Godbout, Paul

    2008-01-01

    Background: Coaches and physical educators must obtain valid data relating to the contribution of each of their players in order to assess their level of performance in team sport competition. This information must also be collected and used in real game situations to be more valid. Developed initially for a physical education class context, the…

  20. Victoria Symptom Validity Test performance in children and adolescents with neurological disorders.

    PubMed

    Brooks, Brian L

    2012-12-01

    It is becoming increasingly more important to study, use, and promote the utility of measures that are designed to detect non-compliance with testing (i.e., poor effort, symptom non-validity, response bias) as part of neuropsychological assessments with children and adolescents. Several measures have evidence for use in pediatrics, but there is a paucity of published support for the Victoria Symptom Validity Test (VSVT) in this population. The purpose of this study was to examine the performance on the VSVT in a sample of pediatric patients with known neurological disorders. The sample consisted of 100 consecutively referred children and adolescents between the ages of 6 and 19 years (mean = 14.0, SD = 3.1) with various neurological diagnoses. On the VSVT total items, 95% of the sample had performance in the "valid" range, with 5% being deemed "questionable" and 0% deemed "invalid". On easy items, 97% were "valid", 2% were "questionable", and 1% was "invalid." For difficult items, 84% were "valid," 16% were "questionable," and 0% was "invalid." For those patients given two effort measures (i.e., VSVT and Test of Memory Malingering; n = 65), none was identified as having poor test-taking compliance on both measures. VSVT scores were significantly correlated with age, intelligence, processing speed, and functional ratings of daily abilities (attention, executive functioning, and adaptive functioning), but not objective performance on the measure of sustained attention, verbal memory, or visual memory. The VSVT has potential to be used in neuropsychological assessments with pediatric patients.

  1. Verification and Validation of Requirements on the CEV Parachute Assembly System Using Design of Experiments

    NASA Technical Reports Server (NTRS)

    Schulte, Peter Z.; Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
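
    A minimal picture of how a DoE-style sweep over simulation inputs can flag parameter combinations that approach a requirement limit is sketched below; the factors, levels, placeholder response function, and 2.5 g load limit are all invented for illustration and are not CPAS models or requirements.

        # Full-factorial DoE over hypothetical parachute-simulation inputs
        from itertools import product

        factors = {
            "drag_coefficient": [0.75, 0.85, 0.95],
            "deploy_altitude_ft": [6000, 8000],
            "vehicle_mass_lb": [18000, 21000],
        }

        def simulated_peak_load_g(cd, altitude_ft, mass_lb):
            # Placeholder response surface standing in for a real simulation run
            return (2.2 + 1.5 * (0.95 - cd)
                    + 0.00002 * (21000 - mass_lb)
                    + 0.00003 * (8000 - altitude_ft))

        limit_g = 2.5   # illustrative requirement limit on peak parachute load
        for cd, alt, mass in product(*factors.values()):
            load = simulated_peak_load_g(cd, alt, mass)
            flag = "EXCEEDS LIMIT" if load > limit_g else "ok"
            print(f"Cd={cd:.2f}  alt={alt} ft  mass={mass} lb  ->  {load:.2f} g  [{flag}]")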

  2. Validation of the Child HCAHPS survey to measure pediatric inpatient experience of care in Flanders.

    PubMed

    Bruyneel, Luk; Coeckelberghs, Ellen; Buyse, Gunnar; Casteels, Kristina; Lommers, Barbara; Vandersmissen, Jo; Van Eldere, Johan; Van Geet, Chris; Vanhaecht, Kris

    2017-07-01

    The recently developed Child HCAHPS provides a standard to measure US hospitals' performance on pediatric inpatient experiences of care. We field-tested Child HCAHPS in Belgium to initiate international comparison. In the development stage, forward/backward translation was conducted and patients assessed the content validity index as excellent. The draft Flemish Child HCAHPS included 63 items: 38 items for five topics hypothesized to be similar to those proposed in the US (communication with parent, communication with child, attention to safety and comfort, hospital environment, and global rating), 10 screeners, a 14-item demographic and descriptive section, and one open-ended item. A 6-week pilot test was subsequently performed in three pediatric wards (general ward, hematology and oncology ward, infant and toddler ward) at a JCI-accredited university hospital. An overall response rate of 90.99% (303/333) was achieved and was consistent across wards. Confirmatory factor analysis largely confirmed the configuration of the proposed composites. Composite and single-item measures related well to patients' global rating of the hospital. Interpretation of different patient experiences across types of wards merits further investigation. Child HCAHPS provides an opportunity for systematic and cross-national assessment of pediatric inpatient experiences. Sharing and implementing international best practices are the next logical step. What is Known: • Patient experience surveys are increasingly used to reflect on the quality, safety, and centeredness of patient care. • While adult inpatient experience surveys are routinely used across countries around the world, the measurement of pediatric inpatient experiences is a young field of research that is essential to reflect on family-centered care. What is New: • We demonstrate that the US-developed Child HCAHPS provides an opportunity for international benchmarking of pediatric inpatient experiences with care through parents

  3. FUNCTIONAL PERFORMANCE TESTING OF THE HIP IN ATHLETES: A SYSTEMATIC REVIEW FOR RELIABILITY AND VALIDITY

    PubMed Central

    Martin, RobRoy L.

    2012-01-01

    Purpose/Background: The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. Methods: A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. Results: The single-leg stance, deep squat, single-leg squat, and star excursion balance tests (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement. Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Conclusions: Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. Level of Evidence: 2b (Systematic Review of Literature) PMID:22893860

  4. Performing a Content Validation Study.

    ERIC Educational Resources Information Center

    Spool, Mark D.

    Content validity is concerned with three components: (1) the job content; (2) the test content, and (3) the strength of the relationship between the two. A content validation study, to be considered adequate and defensible should include at least the following four procedures: (1) A thorough and accurate job analysis (to define the job content);…

  5. Reliability and validity of the Assessment of Daily Activity Performance (ADAP) in community-dwelling older women.

    PubMed

    de Vreede, Paul L; Samson, Monique M; van Meeteren, Nico L; Duursma, Sijmen A; Verhaar, Harald J

    2006-08-01

    The Assessment of Daily Activity Performance (ADAP) test was developed, and modeled after the Continuous-scale Physical Functional Performance (CS-PFP) test, to provide a quantitative assessment of older adults' physical functional performance. The aim of this study was to determine the intra-examiner reliability and construct validity of the ADAP in a community-living older population, and to identify the importance of tester experience. Forty-three community-dwelling, older women (mean age 75 ± 4.3 yr) were randomized to the test-retest reliability study (n=19) or validation study (n=24). The intra-examiner reliability of an experienced (tester 1) and an inexperienced tester (tester 2) was assessed by comparing test and retest scores of 19 participants. Construct validity was assessed by comparing the ADAP scores of 24 participants with self-perceived function by the SF-36 Health Survey, muscle function tests, and the Timed Up and Go test (TUG). Tester 1 had good consistency and reliability scores (mean difference between test and retest scores (DIF), -1.05+/-1.99; 95% confidence interval (CI), -2.58 to 0.48; Cronbach's alpha (alpha) range, 0.83 to 0.98; intraclass correlation (ICC) range, 0.75 to 0.96; Limits of Agreement (LoA), -2.58 to 4.95). Tester 2 had lower reliability scores (DIF, -2.45+/-4.36; 95% CI, -5.56 to 0.67; alpha range, 0.53 to 0.94; ICC range, 0.36 to 0.90; LoA, -6.09 to 10.99), with a systematic difference between test and retest scores for the ADAP domain lower-body strength (-3.81; 95% CI, -6.09 to -1.54). The ADAP correlated with the SF-36 Physical Functioning scale (r=0.67), TUG test (r=-0.91) and with isometric knee extensor strength (r=0.80). The ADAP test is a reliable and valid instrument. Our results suggest that testers should practise using the test to improve reliability before applying it in clinical settings.
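
    The reliability statistics quoted above (Cronbach's alpha, intraclass correlation, and Bland-Altman limits of agreement) can be reproduced from raw test-retest scores. A minimal numpy sketch; the one-way random-effects ICC shown here is one common variant and may differ from the exact form used in the study:

      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: (n_subjects, n_items) matrix of item scores."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

      def icc_oneway(test: np.ndarray, retest: np.ndarray) -> float:
          """ICC(1,1): one-way random effects, single measurement, test-retest data."""
          data = np.column_stack([test, retest])            # (n_subjects, 2)
          n, k = data.shape
          grand = data.mean()
          ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
          ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
          return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

      def limits_of_agreement(test: np.ndarray, retest: np.ndarray):
          """Bland-Altman 95% limits of agreement for the retest-minus-test difference."""
          diff = retest - test
          half_width = 1.96 * diff.std(ddof=1)
          return diff.mean() - half_width, diff.mean() + half_width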

  6. Free Radicals and Reactive Intermediates for the SAGE III Ozone Loss and Validation Experiment (SOLVE) Mission

    NASA Technical Reports Server (NTRS)

    Anderson, James G.

    2001-01-01

    This grant provided partial support for participation in the SAGE III Ozone Loss and Validation Experiment. The NASA-sponsored SOLVE mission was conducted jointly with the European Commission-sponsored Third European Stratospheric Experiment on Ozone (THESEO 2000). Researchers examined processes that control ozone amounts at mid to high latitudes during the Arctic winter and acquired correlative data needed to validate the Stratospheric Aerosol and Gas Experiment (SAGE) III satellite measurements that are used to quantitatively assess high-latitude ozone loss. The campaign began in September 1999 with intercomparison flights out of NASA Dryden Flight Research Center in Edwards, CA, and continued through March 2000, with midwinter deployments out of Kiruna, Sweden. SOLVE was co-sponsored by the Upper Atmosphere Research Program (UARP), Atmospheric Effects of Aviation Project (AEAP), Atmospheric Chemistry Modeling and Analysis Program (ACMAP), and Earth Observing System (EOS) of NASA's Earth Science Enterprise (ESE) as part of the validation program for the SAGE III instrument.

  7. The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection

    ERIC Educational Resources Information Center

    Lievens, Filip; Patterson, Fiona

    2011-01-01

    In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…

  8. Description of the CERES Ocean Validation Experiment (COVE), A Dedicated EOS Validation Test Site

    NASA Astrophysics Data System (ADS)

    Rutledge, K.; Charlock, T.; Smith, B.; Jin, Z.; Rose, F.; Denn, F.; Rutan, D.; Haeffelin, M.; Su, W.; Xhang, T.; Jay, M.

    2001-12-01

    A unique test site located in the mid-Atlantic coastal marine waters has been used by several EOS projects for validation measurements. A common theme across these projects is the need for a stable measurement site within the marine environment for long-term, high quality radiation measurements. The site was initiated by NASA's Clouds and the Earth's Radiant Energy System (CERES) project. One of CERES's challenging goals is to provide upwelling and downwelling shortwave fluxes at several pressure altitudes within the atmosphere and at the surface. Operationally, the radiative transfer model of Fu and Liou (1996, 1998), the CERES instrument-measured radiances, and various other EOS platform data are being used to accomplish this goal. We present here a component of the CERES/EOS validation effort that is focused on verifying and optimizing the prediction algorithms for radiation parameters associated with the marine coastal and oceanic surface types of the planet. For this validation work, the CERES Ocean Validation Experiment (COVE) was developed to provide detailed high-frequency and long-duration measurements of radiation and its associated dependent variables. The CERES validations also include analytical efforts which will not be described here (but see Charlock et al., Su et al., and Smith et al., Fall 2001 AGU Meeting). The COVE activity is based on a rigid ocean platform located approximately twenty kilometers off the coast of Virginia Beach, Virginia. The once-manned US Coast Guard facility rises 35 meters from the ocean surface, allowing the radiation instruments to be well above the splash zone. The depth of the sea is eleven meters at the site. A power and communications system has been installed for present and future requirements. Scientific measurements at the site have primarily been developed within the framework of established national and international monitoring programs. These include the Baseline Surface Radiation Network of the World

  9. Population Spotting Using Big Data: Validating the Human Performance Concept of Operations Analytic Vision

    DTIC Science & Technology

    2017-01-01

    AFRL-SA-WP-SR-2017-0001: Population Spotting Using "Big Data": Validating the Human Performance Concept of Operations Analytic Vision.

  10. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553
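
    The abstract does not name the specific publication bias analyses that were run; a typical first step is a funnel-plot asymmetry check such as Egger's regression of standardized effects on precision. A minimal sketch under that assumption, with hypothetical per-study effect sizes and standard errors:

      import numpy as np
      from scipy import stats

      def egger_test(effects: np.ndarray, std_errs: np.ndarray):
          """Egger's regression: standardized effect vs. precision.

          A non-zero intercept suggests small-study (publication-bias-like) asymmetry.
          """
          standardized = effects / std_errs
          precision = 1.0 / std_errs
          fit = stats.linregress(precision, standardized)
          return fit.intercept, fit.intercept_stderr

      # Hypothetical example: simulated validity estimates and standard errors.
      rng = np.random.default_rng(0)
      effects = rng.normal(0.22, 0.05, size=60)
      std_errs = rng.uniform(0.03, 0.15, size=60)
      intercept, se = egger_test(effects, std_errs)
      print(f"Egger intercept = {intercept:.2f} (SE = {se:.2f})")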

  11. Reflective oxygen saturation monitoring at hypothenar and its validation by human hypoxia experiment.

    PubMed

    Guo, Tao; Cao, Zhengtao; Zhang, Zhengbo; Li, Deyu; Yu, Mengsun

    2015-08-05

    Pulse oxygen saturation (SpO2) is an important parameter for healthcare, and wearable sensors and systems for SpO2 monitoring have become increasingly popular. The aim of this paper is to develop a novel SpO2 monitoring system, which detects photoplethysmographic (PPG) signals at the hypothenar with a reflection-mode sensor embedded into a glove. A special photo-detector section was designed with two photodiodes arranged symmetrically to the red and infrared light-emitting diodes (LEDs) to enhance the signal quality. The reflective sensor was placed in a soft silicon substrate sewn into a glove to fit the surface of the hypothenar. To lower the power consumption, the LED driving current was reduced and energy-efficient electronic components were applied. The performance for PPG signal detection and SpO2 monitoring was evaluated by human hypoxia experiments. Accelerometer-based adaptive noise cancellation (ANC) methods applying the least mean squares (LMS) and recursive least squares (RLS) algorithms were studied to suppress motion artifact. A total of 20 subjects participated in the hypoxia experiment. The wearing comfort of the system was acceptable to them. PPG signals were detected effectively at SpO2 levels from about 100% down to 70%. The experiments showed that the accuracy of the system was 2.34% compared with invasive measurements. Both the LMS and RLS algorithms improved the performance during motion. The total current consumed by the system was only 8 mA. It is feasible to detect PPG signals and monitor SpO2 at the hypothenar. This novel system can achieve reliable SpO2 measurements at different SpO2 levels and on different individuals. The system is lightweight, easy to wear and power-saving. It has the potential to be a solution for wearable monitoring, although more work should be conducted to improve the motion-resistant performance significantly.
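
    The motion-artifact suppression described above pairs the PPG channel with an accelerometer reference in an adaptive noise canceller. A minimal LMS sketch (the RLS variant, filter order, and step size used in the paper are not specified, so the parameters below are illustrative):

      import numpy as np

      def lms_anc(ppg: np.ndarray, accel: np.ndarray, order: int = 8, mu: float = 0.01):
          """Accelerometer-referenced LMS adaptive noise cancellation.

          ppg   : PPG samples corrupted by motion artifact (1-D array)
          accel : accelerometer reference correlated with the artifact (1-D array)
          Returns the artifact-reduced PPG estimate.
          """
          w = np.zeros(order)                  # adaptive filter weights
          cleaned = np.zeros_like(ppg, dtype=float)
          for n in range(order, len(ppg)):
              x = accel[n - order:n][::-1]     # most recent reference samples
              artifact_estimate = w @ x
              e = ppg[n] - artifact_estimate   # error signal = cleaned PPG sample
              w += 2 * mu * e * x              # LMS weight update
              cleaned[n] = e
          return cleaned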

  12. Performance validation of the ANSER control laws for the F-18 HARV

    NASA Technical Reports Server (NTRS)

    Messina, Michael D.

    1995-01-01

    The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.

  14. The Zero Boil-Off Tank Experiment Ground Testing and Verification of Fluid and Thermal Performance

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Kassemi, Mohammad; Kahwaji, Michel; Kieckhafer, Alexander

    2016-01-01

    The Zero Boil-Off Technology (ZBOT) Experiment involves performing a small scale International Space Station (ISS) experiment to study tank pressurization and pressure control in microgravity. The ZBOT experiment consists of a vacuum jacketed test tank filled with an inert fluorocarbon simulant liquid. Heaters and thermo-electric coolers are used in conjunction with an axial jet mixer flow loop to study a range of thermal conditions within the tank. The objective is to provide a high quality database of low gravity fluid motions and thermal transients which will be used to validate Computational Fluid Dynamic (CFD) modeling. This CFD can then be used in turn to predict behavior in larger systems with cryogens. This paper will discuss the work that has been done to demonstrate that the ZBOT experiment is capable of performing the functions required to produce meaningful and accurate results, prior to its launch to the International Space Station. Main systems discussed are expected to include the thermal control system, the optical imaging system, and the tank filling system. This work is sponsored by NASA's Human Exploration Mission Directorate's Physical Sciences Research program.

  15. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  16. Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences.

    PubMed

    Harman, Elena; Azzam, Tarek

    2018-02-01

    This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants is asked to read an interview transcript, identify whether program theory components (Activities and Outcomes) are discussed, and highlight the most relevant passage about that component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant's experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Assessing Cognitive Performance in Badminton Players: A Reproducibility and Validity Study

    PubMed Central

    van de Water, Tanja; Faber, Irene; Elferink-Gemser, Marije

    2017-01-01

    Fast reaction and good inhibitory control are associated with elite sports performance. To evaluate the reproducibility and validity of a newly developed Badminton Reaction Inhibition Test (BRIT), fifteen elite (25 ± 4 years) and nine non-elite (24 ± 4 years) Dutch male badminton players participated in the study. The BRIT measured four components: domain-general reaction time, badminton-specific reaction time, domain-general inhibitory control and badminton-specific inhibitory control. Five participants were retested within three weeks on the badminton-specific components. Reproducibility was acceptable for badminton-specific reaction time (ICC = 0.626, CV = 6%) and for badminton-specific inhibitory control (ICC = 0.317, CV = 13%). Good construct validity was shown for badminton-specific reaction time discriminating between elite and non-elite players (F = 6.650, p < 0.05). Elite players did not outscore non-elite players on domain-general reaction time nor on both components of inhibitory control (p > 0.05). Concurrent validity for domain-general reaction time was good, as it was associated with a national ranking for elite (ρ = 0.70, p < 0.01) and non-elite (ρ = 0.70, p < 0.05) players. No relationship was found between the national ranking and badminton-specific reaction time, nor both components of inhibitory control (p > 0.05). In conclusion, reproducibility and validity of inhibitory control assessment was not confirmed, however, the BRIT appears a reproducible and valid measure of reaction time in badminton players. Reaction time measured with the BRIT may provide input for training programs aiming to improve badminton players’ performance. PMID:28210347

  18. Assessing Cognitive Performance in Badminton Players: A Reproducibility and Validity Study.

    PubMed

    van de Water, Tanja; Huijgen, Barbara; Faber, Irene; Elferink-Gemser, Marije

    2017-01-01

    Fast reaction and good inhibitory control are associated with elite sports performance. To evaluate the reproducibility and validity of a newly developed Badminton Reaction Inhibition Test (BRIT), fifteen elite (25 ± 4 years) and nine non-elite (24 ± 4 years) Dutch male badminton players participated in the study. The BRIT measured four components: domain-general reaction time, badminton-specific reaction time, domain-general inhibitory control and badminton-specific inhibitory control. Five participants were retested within three weeks on the badminton-specific components. Reproducibility was acceptable for badminton-specific reaction time (ICC = 0.626, CV = 6%) and for badminton-specific inhibitory control (ICC = 0.317, CV = 13%). Good construct validity was shown for badminton-specific reaction time discriminating between elite and non-elite players (F = 6.650, p < 0.05). Elite players did not outscore non-elite players on domain-general reaction time nor on both components of inhibitory control (p > 0.05). Concurrent validity for domain-general reaction time was good, as it was associated with a national ranking for elite (ρ = 0.70, p < 0.01) and non-elite (ρ = 0.70, p < 0.05) players. No relationship was found between the national ranking and badminton-specific reaction time, nor both components of inhibitory control (p > 0.05). In conclusion, reproducibility and validity of inhibitory control assessment was not confirmed, however, the BRIT appears a reproducible and valid measure of reaction time in badminton players. Reaction time measured with the BRIT may provide input for training programs aiming to improve badminton players' performance.

  19. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments.

    PubMed

    Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua

    2018-01-04

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundreds of lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    PubMed Central

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundreds of lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  1. Validation of NOViSE.

    PubMed

    Korzeniowski, Przemyslaw; Brown, Daniel C; Sodergren, Mikael H; Barrow, Alastair; Bello, Fernando

    2017-02-01

    The goal of this study was to establish face, content, and construct validity of NOViSE-the first force-feedback enabled virtual reality (VR) simulator for natural orifice transluminal endoscopic surgery (NOTES). Fourteen surgeons and surgical trainees performed 3 simulated hybrid transgastric cholecystectomies using a flexible endoscope on NOViSE. Four of them were classified as "NOTES experts" who had independently performed 10 or more simulated or human NOTES procedures. Seven participants were classified as "Novices" and 3 as "Gastroenterologists" with no or minimal NOTES experience. A standardized 5-point Likert-type scale questionnaire was administered to assess the face and content validity. NOViSE showed good overall face and content validity. In 14 out of 15 statements pertaining to face validity (graphical appearance, endoscope and tissue behavior, overall realism), ≥50% of responses were "agree" or "strongly agree." In terms of content validity, 85.7% of participants agreed or strongly agreed that NOViSE is a useful training tool for NOTES and 71.4% that they would recommend it to others. Construct validity was established by comparing a number of performance metrics such as task completion times, path lengths, applied forces, and so on. NOViSE demonstrated early signs of construct validity. Experts were faster and used a shorter endoscopic path length than novices in all but one task. The results indicate that NOViSE authentically recreates a transgastric hybrid cholecystectomy and sets promising foundations for the further development of a VR training curriculum for NOTES without compromising patient safety or requiring expensive animal facilities.

  2. The Stroop test as a measure of performance validity in adults clinically referred for neuropsychological assessment.

    PubMed

    Erdodi, Laszlo A; Sagar, Sanya; Seke, Kristian; Zuccato, Brandon G; Schwartz, Eben S; Roth, Robert M

    2018-06-01

    This study was designed to develop performance validity indicators embedded within the Delis-Kaplan Executive Function Systems (D-KEFS) version of the Stroop task. Archival data from a mixed clinical sample of 132 patients (50% male; M Age = 43.4; M Education = 14.1) clinically referred for neuropsychological assessment were analyzed. Criterion measures included the Warrington Recognition Memory Test-Words and 2 composites based on several independent validity indicators. An age-corrected scaled score ≤6 on any of the 4 trials reliably differentiated psychometrically defined credible and noncredible response sets with high specificity (.87-.94) and variable sensitivity (.34-.71). An inverted Stroop effect was less sensitive (.14-.29), but comparably specific (.85-.90) to invalid performance. Aggregating the newly developed D-KEFS Stroop validity indicators further improved classification accuracy. Failing the validity cutoffs was unrelated to self-reported depression or anxiety. However, it was associated with elevated somatic symptom report. In addition to processing speed and executive function, the D-KEFS version of the Stroop task can function as a measure of performance validity. A multivariate approach to performance validity assessment is generally superior to univariate models. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. A Malay version of the Child Oral Impacts on Daily Performances (Child-OIDP) index: assessing validity and reliability.

    PubMed

    Yusof, Zamros Y M; Jaafar, Nasruddin

    2012-06-08

    The study aimed to develop and test a Malay version of the Child-OIDP index, evaluate its psychometric properties and report on the prevalence of oral impacts on eight daily performances in a sample of 11-12 year old Malaysian schoolchildren. The Child-OIDP index was translated from English into Malay. The Malay version was tested for reliability and validity on a non-random sample of 132, 11-12 year old schoolchildren from two urban schools in Kuala Lumpur. Psychometric analysis of the Malay Child-OIDP involved face, content, criterion and construct validity tests as well as internal and test-retest reliability. Non-parametric statistical methods were used to assess relationships between Child-OIDP scores and other subjective outcome measures. The standardised Cronbach's alpha was 0.80 and the weighted Kappa was 0.84 (intraclass correlation = 0.79). The index showed significant associations with different subjective measures viz. perceived satisfaction with mouth, perceived needs for dental treatment, perceived oral health status and toothache experience in the previous 3 months (p < 0.05). Two-thirds (66.7%) of the sample had oral impacts affecting one or more performances in the past 3 months. The three most frequently affected performances were cleaning teeth (36.4%), eating foods (34.8%) and maintaining emotional stability (26.5%). In terms of severity of impact, the ability to relax was most severely affected by their oral conditions, followed by ability to socialise and doing schoolwork. Almost three-quarters (74.2%) of schoolchildren with oral impacts had up to three performances affected by their oral conditions. This study indicated that the Malay Child-OIDP index is a valid and reliable instrument to measure the oral impacts of daily performances in 11-12 year old urban schoolchildren in Malaysia.

  4. Validation and Evaluation of Army Aviation Collective Performance Measures

    DTIC Science & Technology

    2014-01-01

    Research Report 1972: Validation and Evaluation of Army Aviation Collective Performance Measures. Martin L. Bink, U.S. Army Research Institute for the Behavioral and Social Sciences. Approved for public release; distribution is unlimited.

  5. Predictive validity of pre-admission assessments on medical student performance.

    PubMed

    Dabaliz, Al-Awwab; Kaadan, Samy; Dabbagh, M Marwan; Barakat, Abdulaziz; Shareef, Mohammad Abrar; Al-Tannir, Mohamad; Obeidat, Akef; Mohamed, Ayman

    2017-11-24

    To examine the predictive validity of pre-admission variables on students' performance in a medical school in Saudi Arabia. In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included the scores of the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. Those datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables were predictive of progress test performance in the same group. On the other hand, none of the pre-admission variables were predictive of cGPA in clinical years. Overall, cGPA strongly predicted students' progress test performance (p<0.001 and B=19.02). Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid ones that predict student performance and functional knowledge, especially in the clinical years.
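
    The multivariate linear regression reported above can be reproduced with standard tooling once the admission and performance data are tabulated. A minimal sketch assuming a hypothetical data frame; the column names are illustrative, not the study's variable names:

      import pandas as pd
      import statsmodels.api as sm

      def regress_cgpa(df: pd.DataFrame):
          """Regress preclinical cGPA on pre-admission variables (illustrative columns)."""
          predictors = ["high_school", "national_achievement", "general_aptitude", "toefl"]
          X = sm.add_constant(df[predictors])
          model = sm.OLS(df["cgpa"], X, missing="drop").fit()
          return model.params, model.pvalues   # unstandardized B coefficients and p-values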

  6. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  7. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    PubMed

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
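
    The validation scheme described above (principal component scores, permutation scrambling, and a held-out third of the samples as a prediction set) can be sketched as follows; the choice of classifier on the component scores is an assumption, since the abstract does not name one:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      def permutation_validated_accuracy(X, y, n_components=5, n_permutations=100, seed=0):
          """Fit PCA + LDA on two thirds of the samples, score the held-out third,
          and compare the observed accuracy with a label-scrambling null distribution."""
          rng = np.random.default_rng(seed)

          def fit_and_score(labels):
              X_tr, X_te, y_tr, y_te = train_test_split(
                  X, labels, test_size=1 / 3, random_state=seed, stratify=labels)
              pca = PCA(n_components=n_components).fit(X_tr)
              clf = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)
              return clf.score(pca.transform(X_te), y_te)

          observed = fit_and_score(np.asarray(y))
          null = np.array([fit_and_score(rng.permutation(y)) for _ in range(n_permutations)])
          p_value = (np.sum(null >= observed) + 1) / (n_permutations + 1)
          return observed, p_value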

  8. Validating the Assessment for Measuring Indonesian Secondary School Students Performance in Ecology

    NASA Astrophysics Data System (ADS)

    Rachmatullah, A.; Roshayanti, F.; Ha, M.

    2017-09-01

    The aims of this study are to validate the American Association for the Advancement of Science (AAAS) Ecology assessment and to examine the performance of Indonesian secondary school students on the assessment. A total of 611 Indonesian secondary school students (218 middle school students and 393 high school students) participated in the study. Forty-five items of the AAAS assessment on the topic of Interdependence in Ecosystems were divided into two versions, with 21 items common to both versions. The linking-item method was used to combine the two versions of the assessment, and Rasch analyses were then used to validate the instrument. An independent-samples t-test was also run to compare the performance of Indonesian and American students based on mean item difficulty. We found that, of the 45 items, three were identified as misfitting. We also found that both Indonesian middle and high school students performed significantly lower than American students, with very large and medium effect sizes, respectively. We discuss our findings with regard to validation issues and the connection to Indonesian students' science literacy.
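
    The linking-item design mentioned above places the two test versions on a common Rasch scale through their shared items. A minimal mean-mean linking sketch; this is one standard approach and may not be the exact procedure used in the study:

      import numpy as np

      def mean_mean_link(diff_form_a: dict, diff_form_b: dict) -> dict:
          """Place form B item difficulties on form A's scale using the common items.

          diff_form_a / diff_form_b: {item_id: difficulty in logits}, estimated separately.
          """
          common = sorted(set(diff_form_a) & set(diff_form_b))
          shift = float(np.mean([diff_form_a[i] - diff_form_b[i] for i in common]))
          return {item: b + shift for item, b in diff_form_b.items()}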

  9. Drag-Free Performance of the ST7 Disturbance Reduction System Flight Experiment on the LISA Pathfinder

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman; O'Donnell, James, Jr.; Hsu, Oscar; Ziemer, John; Dunn, Charles

    2017-01-01

    The Space Technology-7 Disturbance Reduction System (DRS) is an experiment package aboard the European Space Agency (ESA) LISA Pathfinder spacecraft. LISA Pathfinder launched from Kourou, French Guiana, on December 3, 2015. The DRS is tasked to validate two specific technologies: colloidal micro-Newton thrusters (CMNT) to provide low-noise control capability of the spacecraft, and drag-free flight control. This validation is performed using highly sensitive drag-free sensors, which are provided by the LISA Technology Package of the European Space Agency. The Disturbance Reduction System is required to maintain the spacecraft's position with respect to a free-floating test mass to better than 10 nm/√Hz along its sensitive axis (the axis measured by the optical metrology system). It also has a goal of limiting the residual accelerations of either of the two test masses to below 30 (1 + [f / 3 mHz]²) fm/s²/√Hz over the frequency range of 1 to 30 mHz. This paper briefly describes the design and the expected on-orbit performance of the control system for the two modes wherein the drag-free performance requirements are verified. The on-orbit performance of these modes is then compared to the requirements, as well as to the expected performance, and discussed.
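
    Written out explicitly, the two performance requirements paraphrased above take the following form (a reconstruction of the commonly quoted ST7-DRS requirement; the exact published expression should be checked against the mission documentation):

      \[ \sqrt{S_x(f)} \le 10~\mathrm{nm}/\sqrt{\mathrm{Hz}} \quad \text{(position along the sensitive axis)} \]
      \[ \sqrt{S_a(f)} \le 30\left[1 + \left(\frac{f}{3~\mathrm{mHz}}\right)^{2}\right] \mathrm{fm\,s^{-2}}/\sqrt{\mathrm{Hz}}, \qquad 1~\mathrm{mHz} \le f \le 30~\mathrm{mHz} \]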

  10. Face validity, construct validity and training benefits of a virtual reality TURP simulator.

    PubMed

    Bright, Elizabeth; Vine, Samuel; Wilson, Mark R; Masters, Rich S W; McGrath, John S

    2012-01-01

    To assess face validity, construct validity and the training benefits of a virtual reality TURP simulator. Eleven novices (no TURP experience) and 7 experts (>200 TURPs) completed a virtual reality median lobe prostate resection task on the TURPsim™ (Simbionix USA Corp., Cleveland, OH). Performance indicators (percentage of prostate resected (PR), percentage of capsular resection (CR) and time diathermy loop active without tissue contact (TAWC)) were recorded via the TURPsim™ and compared between novices and experts to assess construct validity. Verbal comments provided by experts following task completion were used to assess face validity. Repeated attempts of the task by the novices were analysed to assess the training benefits of the TURPsim™. Experts resected a significantly greater percentage of prostate per minute (p < 0.01) and had significantly less active diathermy time without tissue contact (p < 0.01) than novices. After practice, novices were able to perform the simulation more effectively, with significant improvement in all measured parameters. Improvement in performance was noted in novices following repetitive training, as evidenced by improved TAWC scores that were not significantly different from the expert group (p = 0.18). This study has established face and construct validity for the TURPsim™. The potential benefit in using this tool to train novices has also been demonstrated. Copyright © 2012 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  11. Performance of a cognitive load inventory during simulated handoffs: Evidence for validity.

    PubMed

    Young, John Q; Boscardin, Christy K; van Dijk, Savannah M; Abdullah, Ruqayyah; Irby, David M; Sewell, Justin L; Ten Cate, Olle; O'Sullivan, Patricia S

    2016-01-01

    Advancing patient safety during handoffs remains a public health priority. The application of cognitive load theory offers promise, but is currently limited by the inability to measure cognitive load types. To develop and collect validity evidence for a revised self-report inventory that measures cognitive load types during a handoff. Based on prior published work, input from experts in cognitive load theory and handoffs, and a think-aloud exercise with residents, a revised Cognitive Load Inventory for Handoffs was developed. The Cognitive Load Inventory for Handoffs has items for intrinsic, extraneous, and germane load. Second- and sixth-year students recruited from a Dutch medical school participated in four simulated handoffs (two simple and two complex cases). At the end of each handoff, study participants completed the Cognitive Load Inventory for Handoffs, Paas' Cognitive Load Scale, and one global rating item for intrinsic load, extraneous load, and germane load, respectively. Factor and correlational analyses were performed to collect evidence for validity. Confirmatory factor analysis yielded a single factor that combined intrinsic and germane loads. The extraneous load items performed poorly and were removed from the model. The score from the combined intrinsic and germane load items associated, as predicted by cognitive load theory, with a commonly used measure of overall cognitive load (Pearson's r = 0.83, p < 0.001), case complexity (beta = 0.74, p < 0.001), level of experience (beta = -0.96, p < 0.001), and handoff accuracy (r = -0.34, p < 0.001). These results offer encouragement that intrinsic load during handoffs may be measured via a self-report measure. Additional work is required to develop an adequate measure of extraneous load.

  12. Image quality validation of Sentinel 2 Level-1 products: performance status at the beginning of the constellation routine phase

    NASA Astrophysics Data System (ADS)

    Francesconi, Benjamin; Neveu-VanMalle, Marion; Espesset, Aude; Alhammoud, Bahjat; Bouzinac, Catherine; Clerc, Sébastien; Gascon, Ferran

    2017-09-01

    Sentinel-2 is an Earth Observation mission developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. The mission is based on a constellation of two satellites: Sentinel-2A, launched in June 2015, and Sentinel-2B, launched in March 2017. It offers an unprecedented combination of systematic global coverage of land and coastal areas, a high revisit of five days at the equator and two days at mid-latitudes under the same viewing conditions, high spatial resolution, and a wide field of view for multispectral observations from 13 bands in the visible, near infrared and short wave infrared range of the electromagnetic spectrum. Mission performance is routinely and closely monitored by the S2 Mission Performance Centre (MPC), which includes a consortium of Expert Support Laboratories (ESL). This publication focuses on the Sentinel-2 Level-1 product quality validation activities performed by the MPC. It presents an up-to-date status of the Level-1 mission performances at the beginning of the constellation routine phase. Level-1 performance validations routinely performed cover Level-1 Radiometric Validation (Equalisation Validation, Absolute Radiometry Vicarious Validation, Absolute Radiometry Cross-Mission Validation, Multi-temporal Relative Radiometry Vicarious Validation and SNR Validation), and Level-1 Geometric Validation (Geolocation Uncertainty Validation, Multi-spectral Registration Uncertainty Validation and Multi-temporal Registration Uncertainty Validation). Overall, the Sentinel-2 mission is proving very successful in terms of product quality, thereby fulfilling the promises of the Copernicus program.

  13. Validation experiments to determine radiation partitioning of heat flux to an object in a fully turbulent fire.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricks, Allen; Blanchat, Thomas K.; Jernigan, Dann A.

    2006-06-01

    It is necessary to improve understanding and develop validation data of the heat flux incident to an object located within the fire plume for the validation of SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE. One key aspect of the validation data sets is the determination of the relative contribution of the radiative and convective heat fluxes. To meet this objective, a cylindrical calorimeter with sufficient instrumentation to measure total and radiative heat flux has been designed and fabricated. This calorimeter will be tested both in the controlled radiative environment of the Penlight facility and in a fire environment in the FLAME/Radiant Heat (FRH) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. A significant question of interest to modeling heat flux incident to an object in or near a fire is the contribution of the radiation and convection modes of heat transfer. The series of experiments documented in this test plan is designed to provide data on the radiation partitioning, defined as the fraction of the total heat flux that is due to radiation.
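
    The radiation partitioning defined above is simply the ratio of the radiative heat flux to the total heat flux at the calorimeter surface, with the remainder attributed to convection. Given co-located total and radiative heat flux gauge readings, it reduces to the following sketch (the gauge variable names are illustrative):

      import numpy as np

      def radiation_partition(q_total, q_radiative):
          """Fraction of the incident heat flux carried by radiation."""
          q_total = np.asarray(q_total, dtype=float)
          q_radiative = np.asarray(q_radiative, dtype=float)
          return np.divide(q_radiative, q_total,
                           out=np.zeros_like(q_total), where=q_total != 0)

      # Example: 120 kW/m^2 total with 95 kW/m^2 radiative gives a partition of ~0.79,
      # i.e. roughly 21% of the incident flux is convective.
      print(radiation_partition([120.0], [95.0]))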

  14. Validation of the Italian version of the dissociative experience scale for adolescents and young adults.

    PubMed

    De Pasquale, Concetta; Sciacca, Federica; Hichy, Zira

    2016-01-01

    The Dissociative Experience Scale for Adolescents (A-DES), a 30-item, multidimensional, self-administered questionnaire, was originally validated using a large sample of American young people. We report the linguistic validation process and the metric validity of the Italian version of the A-DES in Italy. A set of questionnaires was provided to a total of 633 participants from March 2015 to April 2016. The participants consisted of 282 boys and 351 girls aged between 18 and 24 years. The translation process consisted of two consecutive steps: forward-backward translation and acceptability testing. The psychometric testing was applied to Italian students who were recruited from the Italian public schools and universities in Sicily. Informed consent was obtained from all participants in the study. All individuals completed the A-DES. Reliability and validity were tested. The translated version was validated on a total of 633 Italian students. The reliability of the A-DES total score is .926. The scale comprises four subscales: Dissociative amnesia, Absorption and imaginative involvement, Depersonalization and derealization, and Passive influence. The reliability of each subscale is: .756 for dissociative amnesia, .659 for absorption and imaginative involvement, .850 for depersonalization and derealization, and .743 for passive influence. The Italian version of the A-DES constitutes a useful instrument to measure dissociative experience in adolescents and young adults in Italy.

  15. Further examination of embedded performance validity indicators for the Conners' Continuous Performance Test and Brief Test of Attention in a large outpatient clinical sample.

    PubMed

    Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A

    2018-01-01

    Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
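
    The classification statistics reported above (AUC, sensitivity, specificity, and positive/negative predictive values at a candidate cut-off) follow directly from the scores and the credible/non-credible criterion. A minimal sketch; treating lower scores as indicative of non-credible performance is an assumption made for illustration:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def cutoff_stats(scores, noncredible, cutoff):
          """scores: embedded-indicator scores; noncredible: 1 = failed criterion, 0 = credible."""
          scores = np.asarray(scores, dtype=float)
          noncredible = np.asarray(noncredible, dtype=int)
          flagged = scores <= cutoff                       # lower score -> flagged
          tp = np.sum(flagged & (noncredible == 1))
          fp = np.sum(flagged & (noncredible == 0))
          fn = np.sum(~flagged & (noncredible == 1))
          tn = np.sum(~flagged & (noncredible == 0))
          return {
              "AUC": roc_auc_score(noncredible, -scores),  # negate: low score = positive
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "PPV": tp / (tp + fp) if (tp + fp) else float("nan"),
              "NPV": tn / (tn + fn) if (tn + fn) else float("nan"),
          }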

  16. Validation of the Land-Surface Energy Budget and Planetary Boundary Layer for Several Intensive field Experiments

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Schubert, Siegfried; Molod, Andrea; Houser, Paul R.

    1999-01-01

    Land-surface processes in a data assimilation system influence the lower troposphere and must be properly represented. With the recent incorporation of the Mosaic Land-surface Model (LSM) into the GEOS Data Assimilation System (DAS), the detailed land-surface processes require strict validation. While global data sources can identify large-scale systematic biases at the monthly timescale, the diurnal cycle is difficult to validate. Moreover, global data sets rarely include variables such as evaporation, sensible heat and soil water. Intensive field experiments, on the other hand, can provide high temporal resolution energy budget and vertical profile data for sufficiently long periods, without global coverage. Here, we evaluate the GEOS DAS against several intensive field experiments. The field experiments are First ISLSCP Field Experiment (FIFE, Kansas, summer 1987), Cabauw (as used in PILPS, Netherlands, summer 1987), Atmospheric Radiation Measurement (ARM, Southern Great Plains, winter and summer 1998) and the Surface Heat Budget of the Arctic Ocean (SHEBA, Arctic ice sheet, winter and summer 1998). The sites provide complete surface energy budget data for periods of at least one year, and some periods of vertical profiles. This comparison provides a detailed validation of the Mosaic LSM within the GEOS DAS for a variety of climatologic and geographic conditions.

  17. Style preference survey: a report on the psychometric properties and a cross-validation experiment.

    PubMed

    Smith, Sherri L; Ricketts, Todd; McArdle, Rachel A; Chisolm, Theresa H; Alexander, Genevieve; Bratt, Gene

    2013-02-01

    Several self-report measures exist that target different aspects of outcomes for hearing aid use. Currently, no comprehensive questionnaire specifically assesses factors that may be important for differentiating outcomes pertaining to hearing aid style. The goal of this work was to develop the Style Preference Survey (SPS), a questionnaire aimed at outcomes associated with hearing aid style differences. Two experiments were conducted. After initial item development, Experiment 1 was conducted to refine the items and to determine its psychometric properties. Experiment 2 was designed to cross-validate the findings from the initial experiment. An observational design was used in both experiments. Participants who wore traditional, custom-fitted (TC) or open-canal (OC) style hearing aids from 3 mo to 3 yr completed the initial experiment. One-hundred and eighty-four binaural hearing aid users (120 of whom wore TC hearing aids and 64 of whom wore OC hearing aids) participated. A new sample of TC and OC users (n = 185) participated in the cross-validation experiment. Currently available self-report measures were reviewed to identify items that might differentiate between hearing aid styles, particularly preference for OC versus TC hearing aid styles. A total of 15 items were selected and modified from available self-report measures. An additional 55 items were developed through consensus of six audiologists for the initial version of the SPS. In the first experiment, the initial SPS version was mailed to 550 veterans who met the inclusion criteria. A total of 184 completed the SPS. Approximately three weeks later, a subset of participants (n = 83) completed the SPS a second time. Basic analyses were conducted to evaluate the psychometric properties of the SPS including subscale structure, internal consistency, test-retest reliability, and responsiveness. Based on the results of Experiment 1, the SPS was revised. A cross-validation experiment was then conducted using the

  18. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
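
    The leave-one-hospital-out scheme and the random-effects pooling described above can be sketched as follows; DerSimonian-Laird pooling of the hospital-specific c-statistics is shown as one common choice, and the model columns and variance approximation are illustrative:

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def leave_one_hospital_out(df: pd.DataFrame, predictors, outcome="death", site="hospital"):
          """Derive the model on all other hospitals, validate on the held-out hospital."""
          results = []
          for hosp in df[site].unique():
              train, test = df[df[site] != hosp], df[df[site] == hosp]
              model = LogisticRegression(max_iter=1000).fit(train[predictors], train[outcome])
              p = model.predict_proba(test[predictors])[:, 1]
              c = roc_auc_score(test[outcome], p)
              n1, n0 = test[outcome].sum(), (1 - test[outcome]).sum()
              var_c = c * (1 - c) / min(n1, n0)       # rough variance approximation
              results.append((hosp, c, var_c))
          return results

      def dersimonian_laird(estimates, variances):
          """Random-effects pooled estimate of the hospital-specific c-statistics."""
          est, var = np.asarray(estimates, float), np.asarray(variances, float)
          w = 1.0 / var
          fixed = np.sum(w * est) / np.sum(w)
          q = np.sum(w * (est - fixed) ** 2)
          tau2 = max(0.0, (q - (len(est) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
          w_star = 1.0 / (var + tau2)
          return np.sum(w_star * est) / np.sum(w_star), tau2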

  19. Predictive validity of pre-admission assessments on medical student performance

    PubMed Central

    Dabaliz, Al-Awwab; Kaadan, Samy; Dabbagh, M. Marwan; Barakat, Abdulaziz; Shareef, Mohammad Abrar; Al-Tannir, Mohamad; Obeidat, Akef

    2017-01-01

    Objectives To examine the predictive validity of pre-admission variables on students’ performance in a medical school in Saudi Arabia.  Methods In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included the scores of the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. Those datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. Results In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables were predictive of progress test performance in the same group. On the other hand, none of the pre-admission variables were predictive of cGPA in clinical years. Overall, cGPA strongly predicted students’ progress test performance (p<0.001 and B=19.02). Conclusions Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid ones that predict student performance and functional knowledge, especially in the clinical years. PMID:29176032

  20. Estimation of Uncertainties for a Supersonic Retro-Propulsion Model Validation Experiment in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Rhode, Matthew N.; Oberkampf, William L.

    2012-01-01

    A high-quality model validation experiment was performed in the NASA Langley Research Center Unitary Plan Wind Tunnel to assess the predictive accuracy of computational fluid dynamics (CFD) models for a blunt-body supersonic retro-propulsion configuration at Mach numbers from 2.4 to 4.6. Static and fluctuating surface pressure data were acquired on a 5-inch-diameter test article with a forebody composed of a spherically-blunted, 70-degree half-angle cone and a cylindrical aft body. One non-powered configuration with a smooth outer mold line was tested as well as three different powered, forward-firing nozzle configurations: a centerline nozzle, three nozzles equally spaced around the forebody, and a combination with all four nozzles. A key objective of the experiment was the determination of experimental uncertainties from a range of sources such as random measurement error, flowfield non-uniformity, and model/instrumentation asymmetries. This paper discusses the design of the experiment towards capturing these uncertainties for the baseline non-powered configuration, the methodology utilized in quantifying the various sources of uncertainty, and examples of the uncertainties applied to non-powered and powered experimental results. The analysis showed that flowfield non-uniformity was the dominant contributor to the overall uncertainty, a finding in agreement with other experiments that have quantified various sources of uncertainty.
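    Not necessarily the paper's methodology: a generic root-sum-square combination of independent uncertainty sources, shown only to illustrate how the relative contribution of each source (for example, flowfield non-uniformity) can be compared; the values are made up.

    ```python
    import numpy as np

    # Illustrative 1-sigma contributions to a pressure-coefficient uncertainty
    sources = {
        "random_measurement": 0.004,
        "flowfield_nonuniformity": 0.012,
        "model_asymmetry": 0.005,
    }

    total = np.sqrt(sum(u**2 for u in sources.values()))   # root-sum-square
    for name, u in sources.items():
        print(f"{name:25s} {u:.3f}  ({100 * u**2 / total**2:.0f}% of variance)")
    print(f"combined (RSS)            {total:.3f}")
    ```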

  1. Validity evidence for the Simulated Colonoscopy Objective Performance Evaluation scoring system.

    PubMed

    Trinca, Kristen D; Cox, Tiffany C; Pearl, Jonathan P; Ritter, E Matthew

    2014-02-01

    Low-cost, objective systems to assess and train endoscopy skills are needed. The aim of this study was to evaluate the ability of Simulated Colonoscopy Objective Performance Evaluation to assess the skills required to perform endoscopy. Thirty-eight subjects were included in this study, all of whom performed 4 tasks. The scoring system measured performance by calculating precision and efficiency. Data analysis assessed the relationship between colonoscopy experience and performance on each task and the overall score. Endoscopic trainees' Simulated Colonoscopy Objective Performance Evaluation scores correlated significantly with total colonoscopy experience (r = .61, P = .003) and experience in the past 12 months (r = .63, P = .002). Significant differences were seen among practicing endoscopists, nonendoscopic surgeons, and trainees (P < .0001). When the 4 tasks were analyzed, each showed significant correlation with colonoscopy experience (scope manipulation, r = .44, P = .044; tool targeting, r = .45, P = .04; loop management, r = .47, P = .032; mucosal inspection, r = .65, P = .001) and significant differences in performance between the endoscopist groups, except for mucosal inspection (scope manipulation, P < .0001; tool targeting, P = .002; loop management, P = .0008; mucosal inspection, P = .27). Simulated Colonoscopy Objective Performance Evaluation objectively assesses the technical skills required to perform endoscopy and shows promise as a platform for proficiency-based skills training. Published by Elsevier Inc.

  2. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in-space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  3. TRIMS: Validating T2 Molecular Effects for Neutrino Mass Experiments

    NASA Astrophysics Data System (ADS)

    Lin, Ying-Ting; Trims Collaboration

    2017-09-01

    The Tritium Recoil-Ion Mass Spectrometer (TRIMS) experiment examines the branching ratio of the molecular tritium (T2) beta decay to the bound state (3HeT+). Measuring this branching ratio helps to validate the current molecular final-state theory applied in neutrino mass experiments such as KATRIN and Project 8. TRIMS consists of a magnet-guided time-of-flight mass spectrometer with a detector located on each end. By measuring the kinetic energy and time-of-flight difference of the ions and beta particles reaching the detectors, we will be able to distinguish molecular ions from atomic ones and hence derive the ratio in question. We will give an update on the apparatus, simulation software, and analysis tools, including efforts to improve the resolution of our detectors and to characterize the stability and uniformity of our field sources. We will also share our commissioning results and prospects for physics data. The TRIMS experiment is supported by U.S. Department of Energy Office of Science, Office of Nuclear Physics, Award Number DE-FG02-97ER41020.

  4. Development and validation of the Measure of Indigenous Racism Experiences (MIRE)

    PubMed Central

    Paradies, Yin C; Cunningham, Joan

    2008-01-01

    Background In recent decades there has been increasing evidence of a relationship between self-reported racism and health. Although a plethora of instruments to measure racism have been developed, very few have been described conceptually or psychometrically. Furthermore, this research field has been limited by a dearth of instruments that examine reactions/responses to racism and by a restricted focus on African American populations. Methods In response to these limitations, the 31-item Measure of Indigenous Racism Experiences (MIRE) was developed to assess self-reported racism for Indigenous Australians. This paper describes the development of the MIRE together with an opportunistic examination of its content, construct and convergent validity in a population health study involving 312 Indigenous Australians. Results Focus group research supported the content validity of the MIRE, and inter-item/scale correlations suggested good construct validity. A good fit with a priori conceptual dimensions was demonstrated in factor analysis, and convergence with a separate item on discrimination was satisfactory. Conclusion The MIRE has considerable utility as an instrument that can assess multiple facets of racism together with responses/reactions to racism among indigenous populations and, potentially, among other ethnic/racial groups. PMID:18426602

  5. Electrolysis Performance Improvement Concept Study (EPICS) Flight Experiment-Reflight

    NASA Technical Reports Server (NTRS)

    Schubert, F. H.

    1997-01-01

    The Electrolysis Performance Improvement Concept Study (EPICS) is a flight experiment to demonstrate and validate in a microgravity environment the Static Feed Electrolyzer (SFE) concept, which was selected for use aboard the International Space Station (ISS) for oxygen (O2) generation. It is also intended to investigate the impact of microgravity on electrochemical cell performance. Electrochemical cells are important to the space program because they provide an efficient means of generating O2 and hydrogen (H2) in space. Oxygen and H2 are essential not only for the survival of humans in space but also for the efficient and economical operation of various space systems. Electrochemical cells can reduce the mass, volume and logistical penalties associated with resupply and storage by generating and/or consuming these gases in space. An initial flight of the EPICS was conducted aboard STS-69 from September 7 to 8, 1995. A temperature sensor characteristics shift and a missing line of software code resulted in only partial success of this initial flight. Based on the review and recommendations of a National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) review team, a reflight activity was initiated to obtain the remaining desired results not achieved during the initial flight.

  6. Validity of Adult Retrospective Reports of Adverse Childhood Experiences: Review of the Evidence

    ERIC Educational Resources Information Center

    Hardt, Jochen; Rutter, Michael

    2004-01-01

    Background: Influential studies have cast doubt on the validity of retrospective reports by adults of their own adverse experiences in childhood. Accordingly, many researchers view retrospective reports with scepticism. Method: A computer-based search, supplemented by hand searches, was used to identify studies reported between 1980 and 2001 in…

  7. Development and Validation of a Scale Assessing Mental Health Clinicians' Experiences of Associative Stigma.

    PubMed

    Yanos, Philip T; Vayshenker, Beth; DeLuca, Joseph S; O'Connor, Lauren K

    2017-10-01

    Mental health professionals who work with people with serious mental illnesses are believed to experience associative stigma. Evidence suggests that associative stigma could play an important role in the erosion of empathy among professionals; however, no validated measure of the construct currently exists. This study examined the convergent and discriminant validity and factor structure of a new scale assessing the associative stigma experiences of clinicians working with people with serious mental illnesses. A total of 473 clinicians were recruited from professional associations in the United States and participated in an online study. Participants completed the Clinician Associative Stigma Scale (CASS) and measures of burnout, quality of care, expectations about recovery, and self-efficacy. Associative stigma experiences were commonly endorsed; eight items on the 18-item scale were endorsed as being experienced "sometimes" or "often" by over 50% of the sample. The new measure demonstrated a logical four-factor structure: "negative stereotypes about professional effectiveness," "discomfort with disclosure," "negative stereotypes about people with mental illness," and "stereotypes about professionals' mental health." The measure had good internal consistency. It was significantly related to measures of burnout and quality of care, but it was not related to measures of self-efficacy or expectations about recovery. Findings suggest that the CASS is internally consistent and shows evidence of convergent validity and that associative stigma is commonly experienced by mental health professionals who work with people with serious mental illnesses.

  8. Development and validation of the FertiMed questionnaire assessing patients' experiences with hormonal fertility medication.

    PubMed

    Lankreijer, K; D'Hooghe, T; Sermeus, W; van Asseldonk, F P M; Repping, S; Dancet, E A F

    2016-08-01

    Can a valid and reliable questionnaire be developed to assess patients' experiences with all of the characteristics of hormonal fertility medication valued by them? The FertiMed questionnaire is a valid and reliable tool that assesses patients' experiences with all medication characteristics valued by them and that can be used for all hormonal fertility medications, irrespective of their route of administration. Hormonal fertility medications cause emotional strain and differ in their dosage regime and route of administration, although they often have comparable effectiveness. Medication experiences of former patients would be informative for medication choices. A recent literature review showed that there is no trustworthy tool to compare patients' experiences of medications with differing routes of administration, regarding all medication characteristics which patients value. The items of the new FertiMed questionnaire were generated by literature review and 23 patient interviews. In 2013, 411 IVF-patients were asked to retrospectively complete the FertiMed questionnaire to assess 1 out of the 8 different medications used for ovarian stimulation, induction of pituitary quiescence, ovulation triggering or luteal support. In total, 276 patients (on average 35 per medication) from 2 university fertility clinics (Belgium, the Netherlands) completed the FertiMed questionnaire (67% response rate). The FertiMed questionnaire questioned whether items were valued by patients and whether these items were experienced while using the assessed medication. Hence, the final outcome 'Experiences with Valued Aspects Scores' (EVAS) combined importance and experience ratings. The content and face validity, reliability, feasibility and discriminative potential of the FertiMed questionnaire were tested and changes were made accordingly. Patient interviews defined 51 items relevant to seven medication characteristics previously proved to be important to patients. Item analysis deleted

  9. Validity and reliability of a novel measure of activity performance and participation.

    PubMed

    Murgatroyd, Phil; Karimi, Leila

    2016-01-01

    To develop and evaluate an innovative clinician-rated measure, which produces global numerical ratings of activity performance and participation. Repeated measures study with 48 community-dwelling participants investigating clinical sensibility, comprehensiveness, practicality, inter-rater reliability, responsiveness, sensitivity and concurrent validity with Barthel Index. Important clinimetric characteristics including comprehensiveness and ease of use were rated >8/10 by clinicians. Inter-rater reliability was excellent on the summary scores (intraclass correlation of 0.95-0.98). There was good evidence that the new outcome measure distinguished between known high and low functional scoring groups, including both responsiveness to change and sensitivity at the same time point in numerous tests. Concurrent validity with the Barthel Index was fair to high (Spearman Rank Order Correlation 0.32-0.85, p > 0.05). The new measure's summary scores were nearly twice as responsive to change compared with the Barthel Index. Other more detailed data could also be generated by the new measure. The Activity Performance Measure is an innovative outcome instrument that showed good clinimetric qualities in this initial study. Some of the results were strong, given the sample size, and further trial and evaluation is appropriate. Implications for Rehabilitation The Activity Performance Measure is an innovative outcome measure covering activity performance and participation. In an initial evaluation, it showed good clinimetric qualities including responsiveness to change, sensitivity, practicality, clinical sensibility, item coverage, inter-rater reliability and concurrent validity with the Barthel Index. Further trial and evaluation is appropriate.

  10. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
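    The patented directed-DOE procedure itself is not reproduced here; the sketch below only fits a generic logistic-regression probability-of-detection curve to hit/miss data versus flaw size, to illustrate the kind of input data set the method consumes. The values are made up.

    ```python
    import numpy as np
    import statsmodels.api as sm

    size = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])  # flaw size
    hit  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])  # 1 = detected

    X = sm.add_constant(np.log(size))          # POD is commonly modeled vs log(size)
    fit = sm.Logit(hit, X).fit(disp=False)
    b0, b1 = fit.params

    # Flaw size at which the fitted probability of detection reaches 90% ("a90")
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
    print(f"a90 ~ {a90:.2f}")
    ```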

  11. A Rasch scaling validation of a 'core' near-death experience.

    PubMed

    Lange, Rense; Greyson, Bruce; Houran, James

    2004-05-01

    For those with true near-death experiences (NDEs), Greyson's (1983, 1990) NDE Scale satisfactorily fits the Rasch rating scale model, thus yielding a unidimensional measure with interval-level scaling properties. With increasing intensity, NDEs reflect peace, joy and harmony, followed by insight and mystical or religious experiences, while the most intense NDEs involve an awareness of things occurring in a different place or time. The semantics of this variable are invariant across True-NDErs' gender, current age, age at time of NDE, and latency and intensity of the NDE, thus identifying NDEs as 'core' experiences whose meaning is unaffected by external variables, regardless of variations in NDEs' intensity. Significant qualitative and quantitative differences were observed between True-NDErs and other respondent groups, mostly revolving around the differential emphasis on paranormal/mystical/religious experiences vs. standard reactions to threat. The findings further suggest that False-Positive respondents reinterpret other profound psychological states as NDEs. Accordingly, the Rasch validation of the typology proposed by Greyson (1983) also provides new insights into previous research, including the possibility of embellishment over time (as indicated by the finding of positive, as well as negative, latency effects) and the potential roles of religious affiliation and religiosity (as indicated by the qualitative differences surrounding paranormal/mystical/religious issues).

  12. Development and Validation of the Appearance and Performance Enhancing Drug Use Schedule

    PubMed Central

    Langenbucher, James W.; Lai, Justine Karmin; Loeb, Katharine L.; Hollander, Eric

    2011-01-01

    Appearance-and-performance enhancing drug (APED) use is a form of drug use that includes use of a wide range of substances such as anabolic-androgenic steroids (AASs) and associated behaviors including intense exercise and dietary control. To date, there are no reliable or valid measures of the core features of APED use. The present study describes the development and psychometric evaluation of the Appearance and Performance Enhancing Drug Use Schedule (APEDUS), which is a semi-structured interview designed to assess the spectrum of drug use and related features of APED use. Eighty-five current APED using men and women (having used an illicit APED in the past year and planning to use an illicit APED in the future) completed the APEDUS and measures of convergent and divergent validity. Inter-rater agreement, scale reliability, one-week test-retest reliability, convergent and divergent validity, and construct validity were evaluated for each of the APEDUS scales. The APEDUS is a modular interview with 10 sections designed to assess the core drug and non-drug phenomena associated with APED use. All scales and individual items demonstrated high inter-rater agreement and reliability. Individual scales significantly correlated with convergent measures (DSM-IV diagnoses, aggression, impulsivity, eating disorder pathology) and were uncorrelated with a measure of social desirability. APEDUS subscale scores were also accurate measures of AAS dependence. The APEDUS is a reliable and valid measure of APED phenomena and an accurate measure of the core pathology associated with APED use. Issues with assessing APED use are considered and directions for future research discussed. PMID:21640487

  13. Drag-Free Performance of the ST7 Disturbance Reduction System Flight Experiment on the LISA Pathfinder

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; O'Donnell, James R.; Hsu, Oscar H.; Ziemer, John K.; Dunn, Charles E.

    2017-01-01

    The Space Technology-7 Disturbance Reduction System (DRS) is an experiment package aboard the European Space Agency (ESA) LISA Pathfinder spacecraft. LISA Pathfinder launched from Kourou, French Guiana on December 3, 2015. The DRS is tasked to validate two specific technologies: colloidal micro-Newton thrusters (CMNT) to provide low-noise control capability of the spacecraft, and drag-free flight control. This validation is performed using highly sensitive drag-free sensors, which are provided by the LISA Technology Package of the European Space Agency. The Disturbance Reduction System is required to maintain the spacecraft's position with respect to a free-floating test mass to better than 10 nm/√Hz along its sensitive axis (the axis of the optical metrology). It also has a goal of limiting the residual accelerations of either of the two test masses to below 30 x 10^-14 (1 + [f/3 mHz]^2) m/s^2/√Hz over the frequency range of 1 to 30 mHz. This paper briefly describes the design and the expected on-orbit performance of the control system for the two modes wherein the drag-free performance requirements are verified. The on-orbit performance of these modes is then compared to the requirements, as well as to the expected performance, and discussed.

  14. Design of experiments in medical physics: Application to the AAA beam model validation.

    PubMed

    Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D

    2017-09-01

    The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1, and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data was used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all accelerators. The energy was found to be an influencing parameter but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
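    A minimal sketch of how computed-versus-measured dose differences from such a designed experiment might be summarized per factor; the file and column names are hypothetical stand-ins for the 72 Taguchi test points.

    ```python
    import pandas as pd

    df = pd.read_csv("doe_results.csv")   # hypothetical: one row per test point
    df["diff_pct"] = 100 * (df["computed_dose"] - df["measured_dose"]) / df["measured_dose"]

    # Overall agreement between computed and measured doses
    print(df["diff_pct"].agg(["mean", "std", "max"]))

    # Mean deviation per level of each DOE factor, to flag influencing parameters
    for factor in ["energy", "mlc", "depth", "x_jaw", "y1_jaw", "y2_jaw", "wedge"]:
        print(df.groupby(factor)["diff_pct"].mean())
    ```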

  15. Validation and Scaling of Soil Moisture in a Semi-Arid Environment: SMAP Validation Experiment 2015 (SMAPVEX15)

    NASA Technical Reports Server (NTRS)

    Colliander, Andreas; Cosh, Michael H.; Misra, Sidharth; Jackson, Thomas J.; Crow, Wade T.; Chan, Steven; Bindlish, Rajat; Chae, Chun; Holifield Collins, Chandra; Yueh, Simon H.

    2017-01-01

    The NASA SMAP (Soil Moisture Active Passive) mission conducted the SMAP Validation Experiment 2015 (SMAPVEX15) in order to support the calibration and validation activities of SMAP soil moisture data products. The main goals of the experiment were to address issues regarding the spatial disaggregation methodologies for improvement of soil moisture products and validation of the in situ measurement upscaling techniques. To support these objectives, high-resolution soil moisture maps were acquired with the airborne PALS (Passive Active L-band Sensor) instrument over an area in southeast Arizona that includes the Walnut Gulch Experimental Watershed (WGEW), and intensive ground sampling was carried out to augment the permanent in situ instrumentation. The objective of the paper was to establish the correspondence and relationship between the highly heterogeneous spatial distribution of soil moisture on the ground and the coarse resolution radiometer-based soil moisture retrievals of SMAP. The high-resolution mapping conducted with PALS provided the required connection between the in situ measurements and SMAP retrievals. The in situ measurements were used to validate the PALS soil moisture acquired at 1-km resolution. Based on the information from a dense network of rain gauges in the study area, the in situ soil moisture measurements did not capture all the precipitation events accurately. That is, the PALS and SMAP soil moisture estimates responded to precipitation events detected by rain gauges, which were in some cases not detected by the in situ soil moisture sensors. It was also concluded that the spatial distribution of the soil moisture resulting from the relatively small spatial extents of the typical convective storms in this region was not completely captured with the in situ stations. After removing those cases (approximately 10% of the observations) the following metrics were obtained: RMSD (root mean square difference) of 0.016 m3/m3 and correlation of 0.83. The
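    A minimal sketch of the validation metrics quoted above (RMSD and correlation between collocated soil-moisture series); the arrays are placeholders for matched in situ and retrieved values in m3/m3.

    ```python
    import numpy as np

    in_situ   = np.array([0.08, 0.12, 0.21, 0.15, 0.10, 0.18])   # m3/m3
    retrieved = np.array([0.09, 0.14, 0.19, 0.16, 0.12, 0.17])   # m3/m3

    rmsd = np.sqrt(np.mean((retrieved - in_situ) ** 2))   # root mean square difference
    r = np.corrcoef(retrieved, in_situ)[0, 1]             # Pearson correlation
    print(f"RMSD = {rmsd:.3f} m3/m3, r = {r:.2f}")
    ```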

  16. An ecologically valid performance-based social functioning assessment battery for schizophrenia.

    PubMed

    Shi, Chuan; He, Yi; Cheung, Eric F C; Yu, Xin; Chan, Raymond C K

    2013-12-30

    Psychiatrists now pay greater attention to the social functioning outcomes of schizophrenia. Evaluating real-world functioning in schizophrenia is challenging because of cultural differences, and no such instrument has been available for the Chinese setting. This study aimed to report the validation of an ecologically valid performance-based everyday functioning assessment for schizophrenia, namely the Beijing Performance-based Functional Ecological Test (BJ-PERFECT). Fifty community-dwelling adults with schizophrenia and 37 healthy controls were recruited. Fifteen of the healthy controls were re-tested one week later. All participants were administered the University of California, San Diego, Performance-based Skill Assessment-Brief version (UPSA-B) and the MATRICS Consensus Cognitive Battery (MCCB). The finalized assessment included three subdomains: transportation, financial management and work ability. The test-retest and inter-rater reliabilities were good. The total score significantly correlated with the UPSA-B. The performance of individuals with schizophrenia was significantly more impaired than that of healthy controls, especially in the domain of work ability. Among individuals with schizophrenia, functional outcome was influenced by premorbid functioning, negative symptoms and neurocognition such as processing speed, visual learning and attention/vigilance. © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Development and initial validation of the assessment of caregiver experience with neuromuscular disease.

    PubMed

    Matsumoto, Hiroko; Clayton-Krasinski, Debora A; Klinge, Stephen A; Gomez, Jaime A; Booker, Whitney A; Hyman, Joshua E; Roye, David P; Vitale, Michael G

    2011-01-01

    Orthopaedic intervention can have a wide range of functional and psychosocial effects on children with neuromuscular disease (NMD). In the multihandicapped child (Gross Motor Function Classification System IV/V), functional status, pain, psychosocial function, and health-related quality of life also have effects on the families of these children. The purpose of this study is to report the development and initial validation of an outcomes instrument specifically designed to assess the caregiver impact experienced by parents raising severely affected NMD children: the Assessment of Caregiver Experience with Neuromuscular Disease (ACEND). In the first part of this prospective study, 61 children with NMD and their parents were administered a range of earlier validated pediatric health measures. A framework technique was used to select the most appropriate and relevant subset of questions from this large set. Sensitivity analyses guided the development of a master question list measuring caregiver impact, excluding items with low relevance, and modifying unclear questions. In the second part of the study, the ACEND was administered to the caregivers of 46 children with moderate-to-severe NMD. Statistical analyses were conducted to determine validity of the instrument. The resulting ACEND instrument included 2 domains, 7 subdomains, and 41 items. Domain 1, examining physical impact, includes 4 subdomains: feeding/grooming/dressing (6 items), sitting/play (5 items), transfers (5 items), and mobility (7 items). Domain 2, which examines general caregiver impact, included 3 subdomains: time (4 items), emotion (9 items), and finance (5 items). Mean overall relevance rating was 6.21 ± 0.37 and clarity rating was 6.68 ± 0.52 (scale 0 to 7). Multiple floor effects in patients with GMFCS V and ceiling effects in patients with GMFCS III were identified almost exclusively in motor-based items. Virtually no floor or ceiling effects were identified in the time, emotion or finance domains.

  18. [Validation of a Japanese version of the Experience in Close Relationship- Relationship Structure].

    PubMed

    Komura, Kentaro; Murakami, Tatsuya; Toda, Koji

    2016-08-01

    The purpose of this study was to translate the Experience of Close Relationship-Relationship Structure (ECR-RS) and evaluate its validity. In study 1 (N = 982), evidence based on internal structure (factor structure, internal consistency, and correlations among subscales) and evidence based on relations to other variables (depression, reassurance seeking and self-esteem) were confirmed. In study 2 (N = 563), evidence based on internal structure was reconfirmed, and evidence based on relations to other variables (IWMS, RQ, and ECR-GO) was confirmed. In study 3 (N = 342), evidence based on internal structure (test-retest reliability) was confirmed. Based on these results, we concluded that the ECR-RS is valid for measuring adult attachment style.

  19. Libet's experiment: Questioning the validity of measuring the urge to move.

    PubMed

    Dominik, Tomáš; Dostál, Daniel; Zielina, Martin; Šmahaj, Jan; Sedláčková, Zuzana; Procházka, Roman

    2017-03-01

    The time of subjectively registered urge to move (W) constituted the central point of most Libet-style experiments. It is therefore crucial to verify the validity of W reports. Our experiment was based on the assumption that the W time is inferred, rather than introspectively perceived. We used the rotating spot method to gather the W reports together with the reports of the subjective timing of actual movement (M). The subjects were assigned the tasks in two different orders. When measured first in the respective session, no significant difference between W and M values was found, which suggests that uninformed subjects tend to confuse W for M reports. Moreover, we found that W values measured after the M task were significantly earlier than W values measured before M. This phenomenon suggests that the apparent difference between W and M values is in fact caused by the subjects' previous experience with M measurements. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T

    2009-01-01

    Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving {sup 233}U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196 issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.

  1. Validating Performance Level Descriptors (PLDs) for the AP® Environmental Science Exam

    ERIC Educational Resources Information Center

    Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen

    2012-01-01

    This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.

  2. Validation and clinical utility of the executive function performance test in persons with traumatic brain injury.

    PubMed

    Baum, C M; Wolf, T J; Wong, A W K; Chen, C H; Walker, K; Young, A C; Carlozzi, N E; Tulsky, D S; Heaton, R K; Heinemann, A W

    2017-07-01

    This study examined the relationships between the Executive Function Performance Test (EFPT), the NIH Toolbox Cognitive Function tests, and neuropsychological executive function measures in 182 persons with traumatic brain injury (TBI) and 46 controls to evaluate construct, discriminant, and predictive validity. Construct validity: There were moderate correlations between the EFPT and the NIH Toolbox Crystallized (r = -.479), Fluid Tests (r = -.420), and Total Composite Scores (r = -.496). Discriminant validity: Significant differences were found in the EFPT total and sequence scores across control, complicated mild/moderate, and severe TBI groups. We found differences in the organisation score between control and severe, and between mild and severe TBI groups. Both TBI groups had significantly lower scores in safety and judgement than controls. Compared to the controls, the severe TBI group demonstrated significantly lower performance on all instrumental activities of daily living (IADL) tasks. Compared to the mild TBI group, the controls performed better on the medication task, and the severe TBI group performed worse on the cooking and telephone tasks. Predictive validity: The EFPT predicted the self-perception of independence measured by the TBI-QOL (beta = -0.49, p < .001) for the severe TBI group. Overall, these data support the validity of the EFPT for use in individuals with TBI.

  3. Validation of a wireless modular monitoring system for structures

    NASA Astrophysics Data System (ADS)

    Lynch, Jerome P.; Law, Kincho H.; Kiremidjian, Anne S.; Carryer, John E.; Kenny, Thomas W.; Partridge, Aaron; Sundararajan, Arvind

    2002-06-01

    A wireless sensing unit for use in a Wireless Modular Monitoring System (WiMMS) has been designed and constructed. Drawing upon advanced technological developments in the areas of wireless communications, low-power microprocessors and micro-electro mechanical system (MEMS) sensing transducers, the wireless sensing unit represents a high-performance yet low-cost solution to monitoring the short-term and long-term performance of structures. A sophisticated reduced instruction set computer (RISC) microcontroller is placed at the core of the unit to accommodate on-board computations, measurement filtering and data interrogation algorithms. The functionality of the wireless sensing unit is validated through various experiments involving multiple sensing transducers interfaced to the sensing unit. In particular, MEMS-based accelerometers are used as the primary sensing transducer in this study's validation experiments. A five degree of freedom scaled test structure mounted upon a shaking table is employed for system validation.

  4. Five Years of JOSIE: Assessment of the Performance of Ozone Sondes Under Quasi-Flight Conditions in the Environmental Simulation Chamber With Regard to Satellite Validation

    NASA Astrophysics Data System (ADS)

    Smit, H. G.; Straeter, W.; Helten, M.; Kley, D.

    2002-05-01

    Up to an altitude of about 20 km, ozone sondes constitute the most important data source with long-term coverage for the derivation of ozone trends at sufficient vertical resolution, particularly in the important altitude region around the tropopause. In this region, and above it in the lower/middle stratosphere up to 30-35 km altitude, ozone sondes are of crucial importance for validating and evaluating satellite measurements, particularly their long-term stability. Each ozone sounding is made with an individual disposable instrument, which therefore has to be well characterized prior to flight, making quality assurance of ozone sonde performance a prerequisite. As part of the quality assurance (QA) plan for ozone sondes in routine use in the Global Atmosphere Watch program of the World Meteorological Organization, the environmental simulation chamber at the Research Centre Juelich (Germany) has been established as the World Calibration Centre for Ozone Sondes. The facility enables control of pressure, temperature and ozone concentration and can simulate flight conditions of ozone soundings up to an altitude of 35 km, with an accurate UV photometer serving as a reference. Within the scope of this QA plan, several JOSIE (Juelich Ozone Sonde Intercomparison Experiment) campaigns have been conducted at the calibration facility since 1996 to assess the performance of ozone sondes of different types and manufacturers. We will present an overview of the results obtained from the different JOSIE experiments. The results will be discussed with regard to the use of ozone sondes to validate satellite measurements. Special attention will be paid to the influence of operating procedures on sonde performance and to the need for standardization to assure ozone sounding data of sufficient quality for satellite validation.

  5. The Experiences in Close Relationship Scale (ECR)-short form: reliability, validity, and factor structure.

    PubMed

    Wei, Meifen; Russell, Daniel W; Mallinckrodt, Brent; Vogel, David L

    2007-04-01

    We developed a 12-item, short form of the Experiences in Close Relationship Scale (ECR; Brennan, Clark, & Shaver, 1998) across 6 studies. In Study 1, we examined the reliability and factor structure of the measure. In Studies 2 and 3, we cross-validated the reliability, factor structure, and validity of the short form measure; whereas in Study 4, we examined test-retest reliability over a 1-month period. In Studies 5 and 6, we further assessed the reliability, factor structure, and validity of the short version of the ECR when administered as a stand-alone instrument. Confirmatory factor analyses indicated that 2 factors, labeled Anxiety and Avoidance, provided a good fit to the data after removing the influence of response sets. We found validity to be equivalent for the short and the original versions of the ECR across studies. Finally, the results were comparable when we embedded the short form within the original version of the ECR and when we administered it as a stand-alone measure.

  6. Skylab experiment performance evaluation manual. Appendix F: Experiment M551 Metals melting (MSFC)

    NASA Technical Reports Server (NTRS)

    Byers, M. S.

    1973-01-01

    Analyses for Experiment M551 Metals Melting (MSFC), to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and post-flight conditions are presented. Experiment contingency plan workaround procedure and malfunction analyses are presented in order to assist in making the experiment operationally successful.

  7. Skylab experiment performance evaluation manual. Appendix H: Experiment M553 sphere forming (MSFC)

    NASA Technical Reports Server (NTRS)

    Thomas, O. H., Jr.

    1973-01-01

    Analyses for Experiment M553 Sphere Forming (MSFC), to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and post-flight conditions are presented. Experiment contingency plan workaround procedure and malfunction analyses are presented in order to assist in making the experiment operationally successful.

  8. An Examination and Validation of an Adapted Youth Experience Scale for University Sport

    ERIC Educational Resources Information Center

    Rathwell, Scott; Young, Bradley W.

    2016-01-01

    Limited tools assess positive development through university sport. Such a tool was validated in this investigation using two independent samples of Canadian university athletes. In Study 1, 605 athletes completed 99 survey items drawn from the Youth Experience Scale (YES 2.0), and separate a priori measurement models were evaluated (i.e., 99…

  9. The Space Technology-7 Disturbance Reduction System Precision Control Flight Validation Experiment Control System Design

    NASA Technical Reports Server (NTRS)

    O'Donnell, James R.; Hsu, Oscar C.; Maghami, Peirman G.; Markley, F. Landis

    2006-01-01

    As originally proposed, the Space Technology-7 Disturbance Reduction System (DRS) project, managed out of the Jet Propulsion Laboratory, was designed to validate technologies required for future missions such as the Laser Interferometer Space Antenna (LISA). The two technologies to be demonstrated by DRS were Gravitational Reference Sensors (GRSs) and Colloidal MicroNewton Thrusters (CMNTs). Control algorithms being designed by the Dynamic Control System (DCS) team at the Goddard Space Flight Center would control the spacecraft so that it flew about a freely-floating GRS test mass, keeping it centered within its housing. For programmatic reasons, the GRSs were descoped from DRS. The primary goals of the new mission are to validate the performance of the CMNTs and to demonstrate precise spacecraft position control. DRS will fly as a part of the European Space Agency (ESA) LISA Pathfinder (LPF) spacecraft along with a similar ESA experiment, the LISA Technology Package (LTP). With no GRS, the DCS attitude and drag-free control systems make use of the sensor being developed by ESA as a part of the LTP. The control system is designed to maintain the spacecraft's position with respect to the test mass, to within 10 nm/√Hz over the DRS science frequency band of 1 to 30 mHz.
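    A minimal sketch of how a position-error time series could be checked against an amplitude-spectral-density requirement such as 10 nm/√Hz over 1-30 mHz; the simulated signal is a placeholder, not flight telemetry.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 1.0                                     # sample rate, Hz
    t = np.arange(0, 200_000, 1 / fs)            # long record to resolve the mHz band
    rng = np.random.default_rng(0)
    pos_err = 5e-9 * rng.standard_normal(t.size) # placeholder position error, metres

    f, psd = welch(pos_err, fs=fs, nperseg=2**14)
    asd = np.sqrt(psd)                           # amplitude spectral density, m/sqrt(Hz)

    band = (f >= 1e-3) & (f <= 30e-3)
    print("requirement met:", np.all(asd[band] < 10e-9))
    ```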

  10. The Outpatient Experience Questionnaire of comprehensive public hospital in China: development, validity and reliability.

    PubMed

    Hu, Yinhuan; Zhang, Zixia; Xie, Jinzhu; Wang, Guanping

    2017-02-01

    The objective of this study is to describe the development of the Outpatient Experience Questionnaire (OPEQ) and to assess the validity and reliability of the scale. Literature review, patient interviews, Delphi method and Cross-sectional validation survey. Six comprehensive public hospitals in China. The survey was carried out on a sample of 600 outpatients. Acceptability of the questionnaire was assessed according to the overall response rate, item non-response rate and the average completion time. Correlation coefficients and confirmatory factor analysis were used to test construct validity. Delphi method was used to assess the content validity of the questionnaire. Cronbach's coefficient alpha and split-half reliability coefficient were used to estimate the internal reliability of the questionnaire. The overall response rate was 97.2% and the item non-response rate ranged from 0% to 0.3%. The mean completion time was 6 min. The Spearman correlations of item-total score ranged from 0.466 to 0.765. The results of confirmatory factor analysis showed that all items had factor loadings above 0.40 and the dimension intercorrelation ranged from 0.449 to 0.773, the goodness of fit of the questionnaire was reasonable. The overall authority grade of expert consultation was 0.80 and Kendall's coefficient of concordance W was 0.186. The Cronbach's coefficients alpha of six dimensions ranged from 0.708 to 0.895, the split-half reliability coefficient (Spearman-Brown coefficient) was 0.969. The OPEQ is a promising instrument covering the most important aspects which influence outpatient experiences of comprehensive public hospital in China. It has good evidence for acceptability, validity and reliability. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
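    A minimal sketch of the internal-consistency statistics reported above (Cronbach's alpha and a Spearman-Brown corrected split-half coefficient); the response matrix is simulated, not OPEQ data.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items matrix of scores."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    def split_half(items):
        """Odd-even split-half correlation with Spearman-Brown correction."""
        odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    rng = np.random.default_rng(0)
    items = rng.integers(1, 6, size=(100, 20))   # hypothetical 5-point responses
    print(cronbach_alpha(items), split_half(items))
    ```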

  11. A novel cell culture model as a tool for forensic biology experiments and validations.

    PubMed

    Feine, Ilan; Shpitzen, Moshe; Roth, Jonathan; Gafny, Ron

    2016-09-01

    To improve and advance DNA forensic casework investigation outcomes, extensive field and laboratory experiments are carried out in a broad range of relevant branches, such as touch and trace DNA, secondary DNA transfer and contamination confinement. Moreover, the development of new forensic tools, for example new sampling appliances, by commercial companies requires ongoing validation and assessment by forensic scientists. A frequent challenge in these kinds of experiments and validations is the lack of a stable, reproducible and flexible biological reference material. As a possible solution, we present here a cell culture model based on skin-derived human dermal fibroblasts. Cultured cells were harvested, quantified and dried on glass slides. These slides were used in adhesive tape-lifting experiments and tests of DNA crossover confinement by UV irradiation. The use of this model enabled a simple and concise comparison between four adhesive tapes, as well as a straightforward demonstration of the effect of UV irradiation intensities on DNA quantity and degradation. In conclusion, we believe this model has great potential to serve as an efficient research tool in forensic biology. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development

    PubMed Central

    Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei

    2014-01-01

    Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: compatibility with LC/MS (free of detergents, etc.); high protein integrity (minimal level of protein degradation and non-biological PTMs); compatibility with common sample preparation methods such as proteolysis, PTM enrichment and mass-tag labeling; and lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: intact protein extracts, with primary use for sample preparation method development and optimization; and pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring.

  13. The prone bridge test: Performance, validity, and reliability among older and younger adults.

    PubMed

    Bohannon, Richard W; Steffl, Michal; Glenney, Susan S; Green, Michelle; Cashwell, Leah; Prajerova, Kveta; Bunn, Jennifer

    2018-04-01

    The prone bridge maneuver, or plank, has been viewed as a potential alternative to curl-ups for assessing trunk muscle performance. The purpose of this study was to assess prone bridge test performance, validity, and reliability among younger and older adults. Sixty younger (20-35 years old) and 60 older (60-79 years old) participants completed this study. Groups were evenly divided by sex. Participants completed surveys regarding physical activity and abdominal exercise participation. Height, weight, body mass index (BMI), and waist circumference were measured. On two occasions, 5-9 days apart, participants held a prone bridge until volitional exhaustion or until repeated technique failure. Validity was examined using data from the first session: convergent validity by calculating correlations between survey responses, anthropometrics, and prone bridge time; known-groups validity by using an ANOVA comparing bridge times of younger and older adults and of men and women. Test-retest reliability was examined by using a paired t-test to compare prone bridge times for Session 1 and Session 2. Furthermore, an intraclass correlation coefficient (ICC) was used to characterize relative reliability and minimal detectable change (MDC95) was used to describe absolute reliability. The mean prone bridge time was 145.3 ± 71.5 s, and was positively correlated with physical activity participation (p ≤ 0.001) and negatively correlated with BMI and waist circumference (p ≤ 0.003). Younger participants had significantly longer plank times than older participants (p = 0.003). The ICC between testing sessions was 0.915. The prone bridge test is a valid and reliable measure for evaluating abdominal performance in both younger and older adults. Copyright © 2017 Elsevier Ltd. All rights reserved.
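    A minimal sketch of a two-session test-retest analysis like the one above: a two-way random-effects ICC(2,1), the standard error of measurement, and the MDC95; the hold times are illustrative, not the study data.

    ```python
    import numpy as np

    def icc_2_1(data):
        """Two-way random-effects, single-measure ICC(2,1); data: subjects x sessions."""
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
        ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
        resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    s1 = np.array([150.0, 90.0, 200.0, 120.0, 60.0, 180.0])   # session 1 hold times, s
    s2 = np.array([145.0, 95.0, 210.0, 115.0, 70.0, 175.0])   # session 2 hold times, s
    data = np.column_stack([s1, s2])

    icc = icc_2_1(data)
    sem = data.std(ddof=1) * np.sqrt(1 - icc)       # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2) * sem                 # minimal detectable change
    print(f"ICC = {icc:.3f}, MDC95 = {mdc95:.1f} s")
    ```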

  14. Skylab experiment performance evaluation manual. Appendix K: Experiment S009 nuclear emulsion (MSFC)

    NASA Technical Reports Server (NTRS)

    Meyers, J. E.

    1972-01-01

    A series of analyses are presented for Experiment S009, nuclear emulsion (MSFC), to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and postflight conditions. Experiment contingency plan workaround procedure and malfunction analyses are included in order to assist in making the experiment operationally successful.

  15. Predictive Validity of National Basketball Association Draft Combine on Future Performance.

    PubMed

    Teramoto, Masaru; Cross, Chad L; Rieger, Randall H; Maak, Travis G; Willick, Stuart E

    2018-02-01

    Teramoto, M, Cross, CL, Rieger, RH, Maak, TG, and Willick, SE. Predictive validity of national basketball association draft combine on future performance. J Strength Cond Res 32(2): 396-408, 2018-The National Basketball Association (NBA) Draft Combine is an annual event where prospective players are evaluated in terms of their athletic abilities and basketball skills. Data collected at the Combine should help NBA teams select the right players for the upcoming NBA draft; however, its value for predicting future performance of players has not been examined. This study investigated predictive validity of the NBA Draft Combine on future performance of basketball players. We performed a principal component analysis (PCA) on the 2010-2015 Combine data to reduce correlated variables (N = 234), a correlation analysis on the Combine data and future on-court performance to examine relationships (maximum pairwise N = 217), and a robust principal component regression (PCR) analysis to predict first-year and 3-year on-court performance from the Combine measures (N = 148 and 127, respectively). Three components were identified within the Combine data through PCA (= Combine subscales): length-size, power-quickness, and upper-body strength. As per the correlation analysis, the individual Combine items for anthropometrics, including height without shoes, standing reach, weight, wingspan, and hand length, as well as the Combine subscale of length-size, had positive, medium-to-large-sized correlations (r = 0.313-0.545) with defensive performance quantified by Defensive Box Plus/Minus. The robust PCR analysis showed that the Combine subscale of length-size was the predictor most significantly associated with future on-court performance (p ≤ 0.05), including Win Shares, Box Plus/Minus, and Value Over Replacement Player, followed by upper-body strength. In conclusion, the NBA Draft Combine has value for predicting future performance of players.
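    A minimal sketch of the PCA-then-regression workflow described above, using an ordinary least-squares fit in place of the robust principal component regression; the feature and target names are hypothetical placeholders for Combine measures and an on-court metric such as Win Shares.

    ```python
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("combine.csv")   # hypothetical: one row per drafted player
    features = ["height", "standing_reach", "weight", "wingspan",
                "vertical_leap", "lane_agility", "bench_press"]
    X, y = df[features], df["win_shares"]

    # Standardize, reduce to a few components, then regress the outcome on them
    pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
    pcr.fit(X, y)
    print(pcr.named_steps["pca"].explained_variance_ratio_)
    print(pcr.score(X, y))            # in-sample R^2 of the component regression
    ```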

  16. Validating a Geographical Image Retrieval System.

    ERIC Educational Resources Information Center

    Zhu, Bin; Chen, Hsinchun

    2000-01-01

    Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…

  17. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature.

    PubMed

    Lippa, Sara M

    2018-04-01

    Over the past two decades, there has been much research on measures of response bias and myriad measures have been validated in a variety of clinical and research samples. This critical review aims to guide clinicians through the use of performance validity tests (PVTs) from test selection and administration through test interpretation and feedback. Recommended cutoffs and relevant test operating characteristics are presented. Other important issues to consider during test selection, administration, interpretation, and feedback are discussed including order effects, coaching, impact on test data, and methods to combine measures and improve predictive power. When interpreting performance validity measures, neuropsychologists must use particular caution in cases of dementia, low intelligence, English as a second language/minority cultures, or low education. PVTs provide valuable information regarding response bias and, under the right circumstances, can provide excellent evidence of response bias. Only after consideration of the entire clinical picture, including validity test performance, can concrete determinations regarding the validity of test data be made.

  18. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.

  19. Experiments on Competence and Performance.

    ERIC Educational Resources Information Center

    Ladefoged, Peter; Fromkin, V.A.

    1968-01-01

    The paper discusses some important distinctions between linguistic competence and linguistic performance. It is the authors' contention that the distinction between the two must be maintained in experimental linguistics, or else inadequate models result. Three experiments are described. In the first, subjects pronounce nonsense words and the…

  20. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  1. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.

  2. Reactivity loss validation of high burn-up PWR fuels with pile-oscillation experiments in MINERVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leconte, P.; Vaglio-Gaudard, C.; Eschbach, R.

    2012-07-01

    The ALIX experimental program relies on the experimental validation of the spent fuel inventory, by chemical analysis of samples irradiated in a PWR between 5 and 7 cycles, and also on the experimental validation of the spent fuel reactivity loss with burn-up, obtained by pile-oscillation measurements in the MINERVE reactor. These latter experiments provide an overall validation of both the fuel inventory and of the nuclear data responsible for the reactivity loss. This program also offers unique experimental data for fuels with a burn-up reaching 85 GWd/t, whereas spent fuel in French PWRs has so far never exceeded 70 GWd/t. The analysis of these experiments is done in two steps with the APOLLO2/SHEM-MOC/CEA2005v4 package. In the first one, the fuel inventory of each sample is obtained by assembly calculations. The calculation route consists of the self-shielding of cross sections on the 281-energy-group SHEM mesh, followed by the flux calculation with the Method of Characteristics in a 2D-exact heterogeneous geometry of the assembly, and finally a depletion calculation by an iterative resolution of the Bateman equations. In the second step, the fuel inventory is used in the analysis of pile-oscillation experiments in which the reactivity of the ALIX spent fuel samples is compared to the reactivity of fresh fuel samples. The comparison between experiment and calculation shows satisfactory results with the JEFF3.1.1 library, which predicts the reactivity loss within 2% for burn-up of ~75 GWd/t and within 4% for burn-up of ~85 GWd/t. (authors)
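
    For readers unfamiliar with the depletion step mentioned above, the short sketch below evaluates the analytic Bateman solution for a hypothetical two-member decay chain; it is illustrative only, is not the APOLLO2/SHEM-MOC depletion solver, and uses invented decay constants and initial inventory.

      # Analytic Bateman solution for a two-member chain N1 -> N2 -> (stable),
      # the kind of equation system a depletion calculation resolves for many nuclides.
      import numpy as np

      lam1, lam2 = 1.0e-6, 5.0e-7       # hypothetical removal constants [1/s]
      N1_0 = 1.0e20                     # hypothetical initial atoms of nuclide 1
      t = np.linspace(0.0, 5.0e6, 6)    # times [s]

      N1 = N1_0 * np.exp(-lam1 * t)
      N2 = N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

      for ti, n1, n2 in zip(t, N1, N2):
          print(f"t = {ti:9.2e} s   N1 = {n1:.3e}   N2 = {n2:.3e}")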

  3. Thermal control surfaces experiment flight system performance

    NASA Technical Reports Server (NTRS)

    Wilkes, Donald R.; Hummer, Leigh L.; Zwiener, James M.

    1991-01-01

    The Thermal Control Surfaces Experiment (TCSE) is the most complex system, other than the LDEF, retrieved after long term space exposure. The TCSE is a microcosm of complex electro-optical payloads being developed and flown by NASA and the DoD, including SDI. The objective of TCSE was to determine the effects of the near-Earth orbital environment and the LDEF induced environment on spacecraft thermal control surfaces. The TCSE was a comprehensive experiment that combined in-space measurements with extensive post flight analyses of thermal control surfaces to determine the effects of exposure to the low earth orbit space environment. The TCSE was the first space experiment to measure the optical properties of thermal control surfaces the way they are routinely measured in a lab. The performance of the TCSE confirms that low cost, complex experiment packages can be developed that perform well in space.

  4. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
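
    The following is a minimal sketch of the IPM idea for a scalar input, under assumptions not taken from the paper (a linear lower and upper bound, with spread measured as the average over the data): a linear program finds the pair of lines of minimal average spread that encloses every observation.

      # Hypothetical IPM sketch: minimize the average spread between an upper and a
      # lower line subject to containing all observations (linear programming).
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 30)
      y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(x.size)   # noisy observations

      # Decision variables z = [a_lo, b_lo, a_up, b_up]
      # Objective: mean spread = (a_up - a_lo) + (b_up - b_lo) * mean(x)
      c = np.array([-1.0, -x.mean(), 1.0, x.mean()])

      # Containment constraints (A_ub @ z <= b_ub):
      #   a_lo + b_lo*x_i <= y_i     and     -(a_up + b_up*x_i) <= -y_i
      A_lo = np.column_stack([np.ones_like(x), x, np.zeros_like(x), np.zeros_like(x)])
      A_up = np.column_stack([np.zeros_like(x), np.zeros_like(x), -np.ones_like(x), -x])
      A_ub = np.vstack([A_lo, A_up])
      b_ub = np.concatenate([y, -y])

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
      a_lo, b_lo, a_up, b_up = res.x
      print(f"lower: {a_lo:.2f} + {b_lo:.2f} x   upper: {a_up:.2f} + {b_up:.2f} x")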

  5. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  6. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  7. The feasibility and concurrent validity of performing the Movement Assessment Battery for Children - 2nd Edition via telerehabilitation technology.

    PubMed

    Nicola, Kristy; Waugh, Jemimah; Charles, Emily; Russell, Trevor

    2018-06-01

    In rural and remote communities children with motor difficulties have less access to rehabilitation services. Telerehabilitation technology is a potential method to overcome barriers restricting access to healthcare in these areas. Assessment is necessary to guide clinical reasoning; however, it is unclear which paediatric assessments can be administered remotely. The Movement Assessment Battery for Children - 2nd Edition is commonly used by various health professionals to assess motor performance of children. The aim of this study was to investigate the feasibility and concurrent validity of performing the Movement Assessment Battery for Children - 2nd Edition remotely via telerehabilitation technology compared to the conventional in-person method. Fifty-nine children enrolled in a state school (5-11 years old) volunteered to perform one in-person and one telerehabilitation-mediated assessment. The order of the method of delivery and the therapist performing the assessment were randomized. After both assessments were complete, a participant satisfaction questionnaire was completed by each child. The Bland-Altman limits of agreement for the total test standard score were -3.15 to 3.22, which is smaller than a pre-determined clinically acceptable margin based on the smallest detectable change. This study establishes the feasibility and concurrent validity of the administration of the Movement Assessment Battery for Children - 2nd Edition via telerehabilitation technology. Overall, participants perceived their experience with telerehabilitation positively. Copyright © 2018 Elsevier Ltd. All rights reserved.
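
    As a hedged illustration of the Bland-Altman analysis mentioned above (synthetic scores, not the study data), the limits of agreement are simply the mean difference plus or minus 1.96 standard deviations of the paired differences:

      # Bland-Altman limits of agreement for paired in-person vs. remote scores
      # (invented data for 59 hypothetical children).
      import numpy as np

      rng = np.random.default_rng(2)
      in_person = rng.normal(10.0, 3.0, 59)            # hypothetical total test standard scores
      remote = in_person + rng.normal(0.0, 1.6, 59)    # remote assessment of the same children

      diff = remote - in_person
      bias = diff.mean()
      sd = diff.std(ddof=1)
      loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # 95% limits of agreement
      print(f"bias = {bias:.2f}, limits of agreement = [{loa[0]:.2f}, {loa[1]:.2f}]")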

  8. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    Davis, David O.

    2015-01-01

    Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions, (2) inconsistent results, (3) undocumented 3D effects (CL-only measurements), and (4) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.

  9. Validity and reliability of CHOICE Health Experience Questionnaire: Thai version.

    PubMed

    Aiyasanon, Nipa; Premasathian, Nalinee; Nimmannit, Akarin; Jetanavanich, Pantip; Sritippayawan, Suchai

    2009-09-01

    Assess the reliability and validity of the Thai translation of the CHOICE Health Experience Questionnaire (CHEQ), an English-language questionnaire developed specifically for end-stage renal disease (ESRD) patients. The CHEQ comprises two parts: the nine general domains of the SF-36 (physical function, role-physical, bodily pain, mental health, role-emotional, social function, vitality, general health, and reported transition) and 16 dialysis-specific domains of the CHEQ (role-physical, mental health, general health, freedom, travel restriction, cognitive function, financial function, restriction of diet and fluids, recreation, work, body image, symptoms, sex, sleep, access, and quality of life). The authors translated the CHEQ questionnaire into Thai and confirmed the accuracy by back translation. The pilot study sample was 10 Thai ESRD patients. The CHEQ (Thai) was then applied to 110 Thai ESRD patients: 23 were on chronic peritoneal dialysis and 87 on chronic intermittent hemodialysis. Statistical analysis included descriptive statistics, the Mann-Whitney U test, Student's t-test, and Cronbach's alpha. Construct validity was satisfactory, with a significant difference (p < 0.001) between the low and high groups. The Cronbach's alpha for the total scale of the CHEQ (Thai) was 0.98. Cronbach's alphas for the individual domains ranged from 0.58 to 0.92 and were greater than 0.7 for all but the social function and quality of life domains (alpha = 0.66 and 0.575). The CHEQ (Thai) is reliable and valid for assessment of Thai ESRD patients receiving chronic dialysis. Its properties are similar to those reported for the original version.
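
    For reference, Cronbach's alpha can be computed directly from a respondents-by-items matrix; the sketch below uses invented data for a hypothetical 8-item scale, not the CHEQ (Thai) responses.

      # Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: array of shape (n_respondents, n_items)."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(3)
      latent = rng.normal(size=(110, 1))                   # shared trait across items
      items = latent + 0.8 * rng.normal(size=(110, 8))     # 8 correlated items, invented
      print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")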

  10. Psychological and interactional characteristics of patients with somatoform disorders: Validation of the Somatic Symptoms Experiences Questionnaire (SSEQ) in a clinical psychosomatic population.

    PubMed

    Herzog, Annabel; Voigt, Katharina; Meyer, Björn; Wollburg, Eileen; Weinmann, Nina; Langs, Gernot; Löwe, Bernd

    2015-06-01

    The new DSM-5 Somatic Symptom Disorder (SSD) emphasizes the importance of psychological processes related to somatic symptoms in patients with somatoform disorders. To address this, the Somatic Symptoms Experiences Questionnaire (SSEQ), the first self-report scale that assesses a broad range of psychological and interactional characteristics relevant to patients with a somatoform disorder or SSD, was developed. This prospective study was conducted to validate the SSEQ. The 15-item SSEQ was administered along with a battery of self-report questionnaires to psychosomatic inpatients. Patients were assessed with the Structured Clinical Interview for DSM-IV to confirm a somatoform, depressive, or anxiety disorder. Confirmatory factor analyses, tests of internal consistency and tests of validity were performed. Patients (n=262) with a mean age of 43.4 years, 60.3% women, were included in the analyses. The previously observed four-factor model was replicated and internal consistency was good (Cronbach's α=.90). Patients with a somatoform disorder had significantly higher scores on the SSEQ (t=4.24, p<.001) than patients with a depressive/anxiety disorder. Construct validity was shown by high correlations with other instruments measuring related constructs. Hierarchical multiple regression analyses showed that the questionnaire predicted health-related quality of life. Sensitivity to change was shown by significantly higher effect sizes of the SSEQ change scores for improved patients than for patients without improvement. The SSEQ appears to be a reliable, valid, and efficient instrument to assess a broad range of psychological and interactional features related to the experience of somatic symptoms. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Validation of Cross Sections with Criticality Experiment and Reaction Rates: the Neptunium Case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Berthier, B.; Le Naour, C.; Stéphan, C.; Paradela, C.; Tarrío, D.; Duran, I.

    2014-04-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium highly enriched in 235U so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explored the hypothesis of deficiencies of the inelastic cross section in 235U which has been invoked by some authors to explain the deviation of 750 pcm. The large modification needed to reduce the deviation seems to be incompatible with existing inelastic cross section measurements. Also we show that the νbar of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  12. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in the cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  13. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.

  14. Validity of clinical color vision tests for air traffic control specialists.

    DOT National Transportation Integrated Search

    1992-10-01

    An experiment on the relationship between aeromedical color vision screening test performance and performance on color-dependent tasks of Air Traffic Control Specialists was replicated to expand the data base supporting the job-related validity of th...

  15. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods has been developed. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997-2008. This allows the isolated skill of downscaling methods to be investigated without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of (i) correlation matrices, (ii) pairwise joint threshold exceedances, and (iii) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related for instance to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
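
    One of the diagnostics above, the decorrelation length, can be illustrated with a small synthetic example (invented station locations and series, not the VALUE data): fit an exponential decay to inter-station correlation as a function of distance and read off the length scale.

      # Estimate a decorrelation length by fitting r(d) = exp(-d/L) to pairwise
      # inter-station correlations (synthetic, spatially correlated daily anomalies).
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)
      n_sta, n_days, true_L = 30, 1000, 300.0              # stations, days, km
      xy = rng.uniform(0.0, 1000.0, size=(n_sta, 2))       # station coordinates in km
      dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

      cov = np.exp(-dist / true_L)                         # exponential spatial covariance
      series = rng.multivariate_normal(np.zeros(n_sta), cov, size=n_days)

      corr = np.corrcoef(series.T)
      iu = np.triu_indices(n_sta, k=1)                     # unique station pairs
      d, r = dist[iu], corr[iu]

      L_hat, _ = curve_fit(lambda d, L: np.exp(-d / L), d, r, p0=[200.0])
      print(f"estimated decorrelation length ~ {L_hat[0]:.0f} km (true {true_L:.0f} km)")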

  16. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594

  17. A novel cuffless device for self-measurement of blood pressure: concept, performance and clinical validation.

    PubMed

    Boubouchairopoulou, N; Kollias, A; Chiu, B; Chen, B; Lagou, S; Anestis, P; Stergiou, G S

    2017-07-01

    A pocket-size cuffless electronic device for self-measurement of blood pressure (BP) has been developed (Freescan, Maisense Inc., Zhubei, Taiwan). The device estimates BP within 10 s using three embedded electrodes and one force sensor that is applied over the radial pulse to evaluate the pulse wave. Before use, basic anthropometric characteristics are recorded on the device, and individualized initial calibration is required based on a standard BP measurement performed using an upper-arm BP monitor. The device performance in providing valid BP readings was evaluated in 313 normotensive and hypertensive adults in three study phases during which the device sensor was upgraded. A formal validation study of a prototype device against a mercury sphygmomanometer was performed according to the American National Standards Institute/Association for the Advancement of Medical Instrumentation/International Organization for Standardization (ANSI/AAMI/ISO) 2013 protocol. The test device succeeded in obtaining a valid BP measurement (three successful readings within up to five attempts) in 55-72% of the participants, which reached 87% with the device sensor upgrade. For the validation study, 125 adults were recruited and 85 met the protocol requirements for inclusion. The mean device-observer BP difference was 3.2±6.7 (s.d.) mm Hg for systolic and 2.6±4.6 mm Hg for diastolic BP (criterion 1). The estimated s.d.s (inter-subject variability) were 5.83 and 4.17 mm Hg, respectively (criterion 2). These data suggest that this prototype cuffless BP monitor provides valid self-measurements in the vast majority of adults, and satisfies the BP measurement accuracy criteria of the ANSI/AAMI/ISO 2013 validation protocol.

  18. An Investigation into the Choral Singer's Experience of Music Performance Anxiety

    ERIC Educational Resources Information Center

    Ryan, Charlene; Andrews, Nicholle

    2009-01-01

    The purpose of this study was to examine the performance experiences of choral singers with respect to music performance anxiety. Members of seven semiprofessional choirs (N = 201) completed questionnaires pertaining to their experience of performance anxiety in the context of their performance history, their experience with conductors, and their…

  19. Three-dimensional localized coherent structures of surface turbulence: Model validation with experiments and further computations.

    PubMed

    Demekhin, E A; Kalaidin, E N; Kalliadasis, S; Vlaskin, S Yu

    2010-09-01

    We validate experimentally the Kapitsa-Shkadov model utilized in the theoretical studies by Demekhin [Phys. Fluids 19, 114103 (2007), doi:10.1063/1.2793148; Phys. Fluids 19, 114104 (2007), doi:10.1063/1.2793149] of surface turbulence on a thin liquid film flowing down a vertical planar wall. For water at 15 °C, surface turbulence typically occurs at an inlet Reynolds number of ≃40. Of particular interest is to assess experimentally the predictions of the model for three-dimensional nonlinear localized coherent structures, which represent elementary processes of surface turbulence. For this purpose we devise simple experiments to investigate the instabilities and transitions leading to such structures. Our experimental results are in good agreement with the theoretical predictions of the model. We also perform time-dependent computations for the formation of coherent structures and their interaction with localized structures of smaller amplitude on the surface of the film.

  20. (In)validation in the Minority: The Experiences of Latino Students Enrolled in an HBCU

    ERIC Educational Resources Information Center

    Allen, Taryn Ozuna

    2016-01-01

    This qualitative, phenomenological study examined the academic and interpersonal validation experiences of four female and four male Latino students who were enrolled in their second- to fifth-year at an HBCU in Texas. Using interviews, campus observations, a questionnaire, and analytic memos, this study sought to understand the role of in- and…

  1. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
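
    The SKOPE/KNL model itself is not reproduced here, but the flavor of an analytical hardware model can be conveyed with a much simpler roofline-style estimate; the peak compute and bandwidth numbers below are illustrative assumptions, not published KNL specifications.

      # Roofline-style analytical estimate: execution time is the larger of the
      # compute-bound and memory-bound time estimates.
      def predicted_time(flops: float, bytes_moved: float,
                         peak_gflops: float = 2662.0,       # assumed double-precision peak
                         peak_gbs: float = 400.0) -> float:  # assumed high-bandwidth-memory rate
          t_compute = flops / (peak_gflops * 1e9)
          t_memory = bytes_moved / (peak_gbs * 1e9)
          return max(t_compute, t_memory)                    # seconds

      # Example: a triad-like kernel a[i] = b[i] + s*c[i] over 10^8 doubles
      n = 1e8
      flops = 2 * n                    # one multiply + one add per element
      bytes_moved = 3 * 8 * n          # read b and c, write a (8-byte doubles)
      print(f"predicted kernel time ~ {predicted_time(flops, bytes_moved) * 1e3:.1f} ms")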

  2. Specificity rates for non-clinical, bilingual, Mexican Americans on three popular performance validity measures.

    PubMed

    Gasquoine, Philip G; Weimer, Amy A; Amador, Arnoldo

    2017-04-01

    To measure specificity as failure rates for non-clinical, bilingual, Mexican Americans on three popular performance validity measures: (a) the language format Reliable Digit Span; (b) visual-perceptual format Test of Memory Malingering; and (c) visual-perceptual format Dot Counting, using optimal/suboptimal effort cut scores developed for monolingual, English-speakers. Participants were 61 consecutive referrals, aged between 18 and 65 years, with <16 years of education who were subjectively bilingual (confirmed via formal assessment) and chose the language of assessment, Spanish or English, for the performance validity tests. Failure rates were 38% for Reliable Digit Span, 3% for the Test of Memory Malingering, and 7% for Dot Counting. For Reliable Digit Span, the failure rates for Spanish (46%) and English (31%) languages of administration did not differ significantly. Optimal/suboptimal effort cut scores derived for monolingual English-speakers can be used with Spanish/English bilinguals when using the visual-perceptual format Test of Memory Malingering and Dot Counting. The high failure rate for Reliable Digit Span suggests it should not be used as a performance validity measure with Spanish/English bilinguals, irrespective of the language of test administration, Spanish or English.

  3. Validation of the Karolinska sleepiness scale against performance and EEG variables.

    PubMed

    Kaida, Kosuke; Takahashi, Masaya; Akerstedt, Torbjörn; Nakata, Akinori; Otsuka, Yasumasa; Haratani, Takashi; Fukasawa, Kenji

    2006-07-01

    The Karolinska sleepiness scale (KSS) is frequently used for evaluating subjective sleepiness. The main aim of the present study was to investigate the validity and reliability of the KSS with electroencephalographic, behavioral and other subjective indicators of sleepiness. Participants were 16 healthy females aged 33-43 (38.1+/-2.68) years. The experiment involved 8 measurement sessions per day for 3 consecutive days. Each session contained the psychomotor vigilance task (PVT), the Karolinska drowsiness test (KDT-EEG alpha & theta power), the alpha attenuation test (AAT-alpha power ratio open/closed eyes) and the KSS. Median reaction time, number of lapses, alpha and theta power density and the alpha attenuation coefficients (AAC) showed a highly significant increase with increasing KSS. The same variables were also significantly correlated with KSS, with a mean value for lapses (r=0.56). The KSS was closely related to EEG and behavioral variables, indicating a high validity in measuring sleepiness. KSS ratings may be a useful proxy for EEG or behavioral indicators of sleepiness.

  4. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
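
    The core selection idea, maximizing the minimum Hamming distance between algorithms' prediction vectors over the chosen validation items, can be sketched with a simple greedy heuristic; this is illustrative only and is not the published VDA implementation.

      # Greedy sketch: pick validation items that keep the minimum pairwise Hamming
      # distance between algorithms' prediction vectors as large as possible.
      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(5)
      n_algs, n_items, budget = 4, 200, 20
      preds = rng.integers(0, 2, size=(n_algs, n_items))   # binary predictions per algorithm

      def min_pairwise_hamming(cols):
          sub = preds[:, cols]
          return min(int((sub[i] != sub[j]).sum()) for i, j in combinations(range(n_algs), 2))

      chosen, remaining = [], list(range(n_items))
      for _ in range(budget):
          # add the item that maximizes the minimum pairwise Hamming distance so far
          best = max(remaining, key=lambda c: min_pairwise_hamming(chosen + [c]))
          chosen.append(best)
          remaining.remove(best)

      print("selected items:", sorted(chosen))
      print("min pairwise Hamming distance on the validation set:", min_pairwise_hamming(chosen))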

  5. User's Manual for Data for Validating Models for PV Module Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marion, W.; Anderberg, A.; Deline, C.

    2014-04-01

    This user's manual describes performance data measured for flat-plate photovoltaic (PV) modules installed in Cocoa, Florida, Eugene, Oregon, and Golden, Colorado. The data include PV module current-voltage curves and associated meteorological data for approximately one-year periods. These publicly available data are intended to facilitate the validation of existing models for predicting the performance of PV modules, and for the development of new and improved models. For comparing different modeling approaches, using these public data will provide transparency and more meaningful comparisons of the relative benefits.

  6. The Sensed Presence Questionnaire (SenPQ): initial psychometric validation of a measure of the “Sensed Presence” experience

    PubMed Central

    Bell, Vaughan

    2017-01-01

    Background The experience of ‘sensed presence’—a feeling or sense that another entity, individual or being is present despite no clear sensory or perceptual evidence—is known to occur in the general population, appears more frequently in religious or spiritual contexts, and seems to be prominent in certain psychiatric or neurological conditions and may reflect specific functions of social cognition or body-image representation systems in the brain. Previous research has relied on ad-hoc measures of the experience and no specific psychometric scale to measure the experience exists to date. Methods Based on phenomenological description in the literature, we created the 16-item Sensed Presence Questionnaire (SenPQ). We recruited participants from (i) a general population sample, and; (ii) a sample including specific selection for religious affiliation, to complete the SenPQ and additional measures of well-being, schizotypy, social anxiety, social imagery, and spiritual experience. We completed an analysis to test internal reliability, the ability of the SenPQ to distinguish between religious and non-religious participants, and whether the SenPQ was specifically related to positive schizotypical experiences and social imagery. A factor analysis was also conducted to examine underlying latent variables. Results The SenPQ was found to be reliable and valid, with religious participants significantly endorsing more items than non-religious participants, and the scale showing a selective relationship with construct relevant measures. Principal components analysis indicates two potential underlying factors interpreted as reflecting ‘benign’ and ‘malign’ sensed presence experiences. Discussion The SenPQ appears to be a reliable and valid measure of sensed presence experience although further validation in neurological and psychiatric conditions is warranted. PMID:28367379

  7. The family experiences of in-hospital care questionnaire in severe traumatic brain injury (FECQ-TBI): a validation study.

    PubMed

    Anke, Audny; Manskow, Unn Sollid; Friborg, Oddgeir; Røe, Cecilie; Arntzen, Cathrine

    2016-11-28

    Family members are important for support and care of their close relative after severe trauma, and their experiences are vital health care quality indicators. The objective was to describe the development of the Family Experiences of in-hospital Care Questionnaire for family members of patients with severe Traumatic Brain Injury (FECQ-TBI), and to evaluate its psychometric properties and validity. The study had a Norwegian multicentre design, inviting 171 family members. The questionnaire development process included a literature review, use of an existing instrument (the parent experience of paediatric care questionnaire), a focus group with close family members, as well as expert group judgments. Items asking for family care experiences related to acute wards and rehabilitation were included. Several items of the paediatric care questionnaire were removed or their wording was changed to comply with the present purpose. Questions covering experiences with the inpatient rehabilitation period, the discharge phase, the family experiences with hospital facilities, the transfer between departments and the economic needs of the family were added. The developed questionnaire was mailed to the participants. Exploratory factor analyses were used to examine scale structure, in addition to screening for data quality and analyses of internal consistency and validity. The questionnaire was returned by 122 (71%) of the family members. Principal component analysis extracted six dimensions (eigenvalues > 1.0): acute organization and information (10 items), rehabilitation organization (13 items), rehabilitation information (6 items), discharge (4 items), hospital facilities-patients (4 items) and hospital facilities-family (2 items). Items related to the acute phase were comparable to items in the two dimensions of rehabilitation: organization and information. All six subscales had high Cronbach's alpha coefficients >0.80. The construct validity was

  8. Development and validation of the Consumer Quality index instrument to measure the experience and priority of chronic dialysis patients.

    PubMed

    van der Veer, Sabine N; Jager, Kitty J; Visserman, Ella; Beekman, Robert J; Boeschoten, Els W; de Keizer, Nicolette F; Heuveling, Lara; Stronks, Karien; Arah, Onyebuchi A

    2012-08-01

    Patient experience is an established indicator of quality of care. Validated tools that measure both experiences and priorities are lacking for chronic dialysis care, hampering identification of negative experiences that patients actually rate important. We developed two Consumer Quality (CQ) index questionnaires, one for in-centre haemodialysis (CHD) and the other for peritoneal dialysis and home haemodialysis (PHHD) care. The instruments were validated using exploratory factor analyses, reliability analysis of identified scales and assessing the association between reliable scales and global ratings. We investigated opportunities for improvement by combining suboptimal experience with patient priority. Sixteen dialysis centres participated in our study. The pilot CQ index for CHD care consisted of 71 questions. Based on data of 592 respondents, we identified 42 core experience items in 10 scales with Cronbach's α ranging from 0.38 to 0.88; five were reliable (α ≥ 0.70). The instrument identified information on centres' fire procedures as the aspect of care exhibiting the biggest opportunity for improvement. The pilot CQ index PHHD comprised 56 questions. The response of 248 patients yielded 31 core experience items in nine scales with Cronbach's α ranging between 0.53 and 0.85; six were reliable. Information on kidney transplantation during pre-dialysis showed most room for improvement. However, for both types of care, opportunities for improvement were mostly limited. The CQ index reliably and validly captures dialysis patient experience. Overall, most care aspects showed limited room for improvement, mainly because patients participating in our study rated their experience to be optimal. To evaluate items with high priority, but with which relatively few patients have experience, more qualitative instruments should be considered.

  9. Development and validation of the BRIGHTLIGHT Survey, a patient-reported experience measure for young people with cancer.

    PubMed

    Taylor, Rachel M; Fern, Lorna A; Solanki, Anita; Hooker, Louise; Carluccio, Anna; Pye, Julia; Jeans, David; Frere-Smith, Tom; Gibson, Faith; Barber, Julie; Raine, Rosalind; Stark, Dan; Feltbower, Richard; Pearce, Susie; Whelan, Jeremy S

    2015-07-28

    Patient experience is increasingly used as an indicator of high quality care in addition to more traditional clinical end-points. Surveys are generally accepted as appropriate methodology to capture patient experience. No validated patient experience surveys exist specifically for adolescents and young adults (AYA) aged 13-24 years at diagnosis with cancer. This paper describes early work undertaken to develop and validate a descriptive patient experience survey for AYA with cancer that encompasses both their cancer experience and age-related issues. We aimed to develop, with young people, an experience survey meaningful and relevant to AYA to be used in a longitudinal cohort study (BRIGHTLIGHT), ensuring high levels of acceptability to maximise study retention. A three-stage approach was employed: Stage 1 involved developing a conceptual framework, conducting literature/Internet searches and establishing content validity of the survey; Stage 2 confirmed the acceptability of methods of administration and consisted of four focus groups involving 11 young people (14-25 years), three parents and two siblings; and Stage 3 established survey comprehension through telephone-administered cognitive interviews with a convenience sample of 23 young people aged 14-24 years. Stage 1: Two hundred and thirty-eight questions were developed from qualitative reports of young people's cancer and treatment-related experience. Stage 2: The focus groups identified three core themes: (i) issues directly affecting young people, e.g. impact of treatment-related fatigue on ability to complete survey; (ii) issues relevant to the actual survey, e.g. ability to answer questions anonymously; (iii) administration issues, e.g. confusing format in some supporting documents. Stage 3: Cognitive interviews indicated high levels of comprehension requiring minor survey amendments. Collaborating with young people with cancer has enabled a survey to be developed that is both meaningful to young

  10. EPIC Calibration/Validation Experiment Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Steven E; Chilson, Phillip; Argrow, Brian

    A field exercise involving several different kinds of Unmanned Aerial Systems (UAS) and supporting instrumentation systems provided by DOE/ARM and NOAA/NSSL was conducted at the ARM SGP site in Lamont, Oklahoma, on 29-30 October 2016. This campaign was part of a larger National Oceanic and Atmospheric Administration (NOAA) UAS Program Office program awarded to the National Severe Storms Laboratory (NSSL), named Environmental Profiling and Initiation of Convection (EPIC). The EPIC Field Campaign (Test and Calibration/Validation) proposed to ARM was a test or “dry run” for a follow-up campaign to be requested for spring/summer 2017. The EPIC project addresses NOAA’s objective to “evaluate options for UAS profiling of the lower atmosphere with applications for severe weather.” The project goal is to demonstrate that fixed-wing and rotary-wing small UAS have the combined potential to provide a unique observing system capable of providing detailed profiles of temperature, moisture, and winds within the atmospheric boundary layer (ABL) to help determine the potential for severe weather development. Specific project objectives are: 1) to develop small UAS capable of acquiring needed wind and thermodynamic profiles and transects of the ABL using one fixed-wing UAS operating in tandem with two different fixed rotary-wing UAS pairs; 2) adapt and test miniaturized, high-precision, and fast-response atmospheric sensors with high accuracy in strong winds characteristic of the pre-convective ABL in Oklahoma; 3) conduct targeted short-duration experiments at the ARM Southern Great Plains site in northern Oklahoma concurrently with a second site to be chosen in “real-time” from the Oklahoma Mesonet in coordination with the National Weather Service (NWS) Norman Forecast Office; and 4) gain valuable experience in pursuit of NOAA’s goals for determining the value of airborne, mobile observing systems for monitoring rapidly evolving high-impact severe weather

  11. Impact of Previous Pharmacy Work Experience on Pharmacy School Academic Performance

    PubMed Central

    Mar, Ellena; T-L Tang, Terrill; Sasaki-Hill, Debra; Kuperberg, James R.; Knapp, Katherine

    2010-01-01

    Objectives To determine whether students' previous pharmacy-related work experience was associated with their pharmacy school performance (academic and clinical). Methods The following measures of student academic performance were examined: pharmacy grade point average (GPA), scores on cumulative high-stakes examinations, and advanced pharmacy practice experience (APPE) grades. The quantity and type of pharmacy-related work experience each student performed prior to matriculation was solicited through a student survey instrument. Survey responses were correlated with academic measures, and demographic-based stratified analyses were conducted. Results No significant difference in academic or clinical performance between those students with prior pharmacy experience and those without was identified. Subanalyses by work setting, position type, and substantial pharmacy work experience did not reveal any association with student performance. A relationship was found, however, between age and work experience, ie, older students tended to have more work experience than younger students. Conclusions Prior pharmacy work experience did not affect students' overall academic or clinical performance in pharmacy school. The lack of significant findings may have been due to the inherent practice limitations of nonpharmacist positions, changes in pharmacy education, and the limitations of survey responses. PMID:20498735

  12. Noncredible cognitive performance at clinical evaluation of adult ADHD: An embedded validity indicator in a visuospatial working memory test.

    PubMed

    Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Lange, Klaus W; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara

    2017-12-01

    The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate regarding the identification of noncredible performance. This study describes the development of a visuospatial working memory test, including a validity indicator for noncredible cognitive performance of adults with ADHD. Visuospatial working memory of adults with ADHD (n = 48) was first compared to the test performance of healthy individuals (n = 48). Furthermore, a simulation design was performed including 252 individuals who were randomly assigned to either a control group (n = 48) or to 1 of 3 simulation groups who were requested to feign ADHD (n = 204). Additional samples of 27 adults with ADHD and 69 instructed simulators were included to cross-validate findings from the first samples. Adults with ADHD showed impaired visuospatial working memory performance of medium size as compared to healthy individuals. Simulation groups committed significantly more errors and had shorter response times as compared to patients with ADHD. Moreover, binary logistic regression analysis was carried out to derive a validity index that optimally differentiates between true and feigned ADHD. ROC analysis demonstrated high classification rates of the validity index, as shown in excellent specificity (95.8%) and adequate sensitivity (60.3%). The visuospatial working memory test as presented in this study therefore appears sensitive in indicating cognitive impairment of adults with ADHD. Furthermore, the embedded validity index revealed promising results concerning the detection of noncredible cognitive performance of adults with ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
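
    A hedged sketch of the analysis idea above (synthetic data, not the study's validity index): fit a logistic regression on error counts and response times to separate genuine from feigned impairment, then read sensitivity at a high-specificity cutoff from the ROC curve.

      # Logistic-regression-derived validity score and ROC-based sensitivity at a
      # fixed high specificity (invented group means and spreads).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(7)
      n = 150
      # group 0 = genuine ADHD, group 1 = instructed simulators (more errors, faster responses)
      errors = np.concatenate([rng.normal(8, 3, n), rng.normal(14, 4, n)])
      rt_ms = np.concatenate([rng.normal(900, 150, n), rng.normal(750, 150, n)])
      X = np.column_stack([errors, rt_ms])
      y = np.concatenate([np.zeros(n), np.ones(n)])

      score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
      fpr, tpr, thresholds = roc_curve(y, score)

      target_spec = 0.96                         # a high-specificity operating point
      ok = fpr <= (1.0 - target_spec)
      print(f"sensitivity at >= {target_spec:.0%} specificity: {tpr[ok].max():.2f}")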

  13. Quality Control and Analysis of Microphysical Data Collected in TRMM Aircraft Validation Experiments

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.

    2004-01-01

    This report summarizes our efforts on the funded project 'Quality Control and Analysis of Microphysical Data Collected in TRMM Airborne Validation Experiments', NASA NAG5-9663, Andrew Heymsfield, P. I. We begin this report by summarizing our activities in FY2000-FY2004. We then present some highlights of our work. The last part of the report lists the publications that have resulted from our funding through this grant.

  14. Student-Directed Video Validation of Psychomotor Skills Performance: A Strategy to Facilitate Deliberate Practice, Peer Review, and Team Skill Sets.

    PubMed

    DeBourgh, Gregory A; Prion, Susan K

    2017-03-22

    Background Essential nursing skills for safe practice are not limited to technical skills, but include abilities for determining salience among clinical data within dynamic practice environments, demonstrating clinical judgment and reasoning, problem-solving abilities, and teamwork competence. Effective instructional methods are needed to prepare new nurses for entry-to-practice in contemporary healthcare settings. Method This mixed-methods descriptive study explored self-reported perceptions of a process to self-record videos for psychomotor skill performance evaluation in a convenience sample of 102 pre-licensure students. Results Students reported gains in confidence and skill acquisition using team skills to record individual videos of skill performance, and described the importance of teamwork, peer support, and deliberate practice. Conclusion Although time consuming, the production of student-directed video validations of psychomotor skill performance is an authentic task with meaningful accountabilities that is well-received by students as an effective, satisfying learner experience to increase confidence and competence in performing psychomotor skills.

  15. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
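
    The phenomenon described above can be reproduced with a minimal simulation (not the authors' code): when batch is fully confounded with group and the only systematic difference is the batch shift, cross-validated accuracy on the confounded data looks excellent while accuracy on an independent, unconfounded batch collapses to chance.

      # Batch effect fully confounded with group: CV accuracy is optimistic,
      # independent-batch accuracy is near chance.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)
      n_per, p = 50, 200

      def make_batch(shift, label):
          X = rng.standard_normal((n_per, p)) + shift      # batch effect on all features
          y = np.full(n_per, label)
          return X, y

      # Training data: group 0 comes entirely from batch A, group 1 entirely from batch B
      Xa, ya = make_batch(shift=0.0, label=0)
      Xb, yb = make_batch(shift=0.5, label=1)
      X_train, y_train = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

      # Independent data: both groups measured in a single new batch (no confounding,
      # and no true group signal in this simulation)
      Xc0, yc0 = make_batch(shift=1.0, label=0)
      Xc1, yc1 = make_batch(shift=1.0, label=1)
      X_test, y_test = np.vstack([Xc0, Xc1]), np.concatenate([yc0, yc1])

      clf = LogisticRegression(max_iter=1000)
      cv_acc = cross_val_score(clf, X_train, y_train, cv=5).mean()
      test_acc = clf.fit(X_train, y_train).score(X_test, y_test)
      print(f"cross-validated accuracy: {cv_acc:.2f}   independent-batch accuracy: {test_acc:.2f}")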

  16. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability?

    PubMed

    Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P

    2017-12-01

    The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.

  17. Design validation and performance of closed loop gas recirculation system

    NASA Astrophysics Data System (ADS)

    Kalmani, S. D.; Joshi, A. V.; Majumder, G.; Mondal, N. K.; Shinde, R. R.

    2016-11-01

    A pilot experimental setup of the India-based Neutrino Observatory's ICAL detector has been operational for the last 4 years at TIFR, Mumbai. Twelve glass RPC detectors of size 2 × 2 m2, with a gas gap of 2 mm, are under test in a closed loop gas recirculation system. These RPCs are continuously purged individually with a gas mixture of R134a (C2H2F4), isobutane (iC4H10) and sulphur hexafluoride (SF6) at a steady rate of 360 ml/h to maintain about one volume change a day. To economize gas mixture consumption and to reduce the effluents released into the atmosphere, a closed loop system has been designed, fabricated and installed at TIFR. The pressure and flow rate in the loop are controlled by mass flow controllers and pressure transmitters. The performance and integrity of the RPCs in the pilot experimental setup are being monitored to assess the effect of periodic fluctuations and transients in atmospheric pressure and temperature, room pressure variation, flow pulsations, uniformity of gas distribution and power failures. The capability of the closed loop gas recirculation system to respond to these changes is also studied. The conclusions from the above experiment are presented. The validation of the first design considerations and subsequent modifications has provided improved guidelines for the future design of the engineering module gas system.
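
    As a quick arithmetic check of the flow figures quoted above (assuming the stated 2 m x 2 m area and 2 mm gap define the gas volume of a single chamber):

      # One-chamber gas volume vs. daily purge at 360 ml/h.
      gap_volume_l = 2.0 * 2.0 * 0.002 * 1000.0     # m * m * m -> litres (= 8 L)
      daily_flow_l = 0.360 * 24.0                   # 360 ml/h for 24 h (= 8.64 L)
      print(f"gas gap volume: {gap_volume_l:.1f} L, daily purge: {daily_flow_l:.2f} L "
            f"-> {daily_flow_l / gap_volume_l:.2f} volume changes per day")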

  18. Validation of instrumentation to monitor dynamic performance of olympic weightlifters.

    PubMed

    Bruenger, Adam J; Smith, Sarah L; Sands, William A; Leigh, Michael R

    2007-05-01

    The purpose of this study was to validate the accuracy and reliability of the Weightlifting Video Overlay System (WVOS) used by coaches and sport biomechanists at the United States Olympic Training Center. Static trials with the bar set at specific positions and dynamic trials of a power snatch were performed. Static and dynamic values obtained by the WVOS were compared with values obtained by tape measure and standard video kinematic analysis. Coordinate positions (horizontal [X] and vertical [Y]) were compared on both ends (left and right) of the bar. Absolute technical error of measurement between WVOS and kinematic values was calculated (0.97 cm [left X], 0.98 cm [right X], 0.88 cm [left Y], and 0.53 cm [right Y]) for the static data. Pearson correlations for all dynamic trials exceeded r = 0.88. The greatest discrepancies between the 2 measuring systems were found to occur when there was twisting of the bar during the performance. This error was probably due to the location on the bar where the coordinates were measured. The WVOS appears to provide accurate position information when compared with standard kinematics; however, care must be taken in evaluating position measurements if there is a significant amount of twisting in the movement. The WVOS appears to be reliable and valid within reasonable error limits for the determination of weightlifting movement technique.
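
    The absolute technical error of measurement quoted above is conventionally computed from paired readings of the two systems as sqrt(sum of squared differences / 2n). A small sketch under that standard definition; the paired positions below are invented, not the WVOS data.

```python
import numpy as np

def technical_error_of_measurement(a, b):
    """Absolute TEM between paired measurements from two systems:
    sqrt(sum(d_i**2) / (2 * n)), with d_i the paired differences."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

# Hypothetical paired bar positions (cm) from two measurement systems.
system_a = [101.2, 95.4, 88.7, 110.3, 99.8]
system_b = [100.1, 96.0, 89.5, 109.2, 100.6]
print(round(technical_error_of_measurement(system_a, system_b), 2), "cm")
```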

  19. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. ASSET training reduced time to complete vascular

  20. [Caregiver's health: adaption and validation in a Spanish population of the Experience of Caregiving Inventory (ECI)].

    PubMed

    Crespo-Maraver, Mariacruz; Doval, Eduardo; Fernández-Castro, Jordi; Giménez-Salinas, Jordi; Prat, Gemma; Bonet, Pere

    2018-04-04

    To adapt and to validate the Experience of Caregiving Inventory (ECI) in a Spanish population, providing empirical evidence of its internal consistency, internal structure and validity. Psychometric validation of the adapted version of the ECI. One hundred and seventy-two caregivers (69.2% women), mean age 57.51 years (range: 21-89) participated. Demographic and clinical data, standardized measures (ECI, suffering scale of SCL-90-R, Zarit burden scale) were used. The two scales of negative evaluation of the ECI most related to serious mental disorders (disruptive behaviours [DB] and negative symptoms [NS]) and the two scales of positive appreciation (positive personal experiences [PPE], and good aspects of the relationship [GAR]) were analyzed. Exploratory structural equation modelling was used to analyze the internal structure. The relationship between the ECI scales and the SCL-90-R and Zarit scores was also studied. The four-factor model presented a good fit. Cronbach's alpha (DB: 0.873; NS: 0.825; PPE: 0.720; GAR: 0.578) showed a higher homogeneity in the negative scales. The SCL-90-R scores correlated with the negative ECI scales, and none of the ECI scales correlated with the Zarit scale. The Spanish version of the ECI can be considered a valid, reliable, understandable and feasible self-report measure for its administration in the health and community context. Copyright © 2018 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.
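
    The internal-consistency figures above are Cronbach's alpha values, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A small illustrative computation; the item responses below are invented, not the ECI data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 6 caregivers to a 4-item subscale (0-4 Likert).
scores = [[3, 2, 3, 4],
          [1, 1, 2, 1],
          [4, 3, 4, 4],
          [2, 2, 1, 2],
          [0, 1, 1, 0],
          [3, 3, 2, 3]]
print(round(cronbach_alpha(scores), 3))
```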

  1. Validation of the INCEPT: A Multisource Feedback Tool for Capturing Different Perspectives on Physicians' Professional Performance.

    PubMed

    van der Meulen, Mirja W; Boerebach, Benjamin C M; Smirnova, Alina; Heeneman, Sylvia; Oude Egbrink, Mirjam G A; van der Vleuten, Cees P M; Arah, Onyebuchi A; Lombarts, Kiki M J M H

    2017-01-01

    Multisource feedback (MSF) instruments are used to provide reliable and valid data on physicians' performance from multiple perspectives, and must do so feasibly. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. Using explorative and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α and generalizability analyses, the psychometric qualities and feasibility of the INCEPT were investigated. For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self)-management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was given by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations of three peers, three residents and three to four coworkers were sufficient. The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ as indicated by the different item clustering per performance factor.

  2. Unsteady Aerodynamic Validation Experiences From the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel

    2014-01-01

    The AIAA Aeroelastic Prediction Workshop (AePW) was held in April 2012, bringing together communities of aeroelasticians, computational fluid dynamicists and experimentalists. The extended objective was to assess the state of the art in computational aeroelastic methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. As a step in this process, workshop participants analyzed unsteady aerodynamic and weakly-coupled aeroelastic cases. Forced oscillation and unforced system experiments and computations have been compared for three configurations. This paper emphasizes interpretation of the experimental data, computational results and their comparisons from the perspective of validation of unsteady system predictions. The issues examined in detail are variability introduced by input choices for the computations, post-processing, and static aeroelastic modeling. The final issue addressed is interpreting unsteady information that is present in experimental data that is assumed to be steady, and the resulting consequences on the comparison data sets.

  3. Development and Validation of a Rating Scale for Wind Jazz Improvisation Performance

    ERIC Educational Resources Information Center

    Smith, Derek T.

    2009-01-01

    The purpose of this study was to construct and validate a rating scale for collegiate wind jazz improvisation performance. The 14-item Wind Jazz Improvisation Evaluation Scale (WJIES) was constructed and refined through a facet-rational approach to scale development. Five wind jazz students and one professional jazz educator were asked to record…

  4. The NASA CloudSat/GPM Light Precipitation Validation Experiment (LPVEx)

    NASA Technical Reports Server (NTRS)

    Petersen, Walter A.; L'Ecuyer, Tristan; Moisseev, Dmitri

    2011-01-01

    Ground-based measurements of cool-season precipitation at mid and high latitudes (e.g., above 45 deg N/S) suggest that a significant fraction of the total precipitation volume falls in the form of light rain, i.e., at rates less than or equal to a few mm/h. These cool-season light rainfall events often originate in situations of a low-altitude (e.g., lower than 2 km) melting level and pose a significant challenge to the fidelity of all satellite-based precipitation measurements, especially those relying on the use of multifrequency passive microwave (PMW) radiometers. As a result, significant disagreements exist between satellite estimates of rainfall accumulation poleward of 45 deg. Ongoing efforts to develop, improve, and ultimately evaluate physically-based algorithms designed to detect and accurately quantify high latitude rainfall, however, suffer from a general lack of detailed, observationally-based ground validation datasets. These datasets serve as a physically consistent framework from which to test and refine algorithm assumptions, and as a means to build the library of algorithm retrieval databases in higher latitude cold-season light precipitation regimes. These databases are especially relevant to NASA's CloudSat and Global Precipitation Measurement (GPM) ground validation programs that are collecting high-latitude precipitation measurements in meteorological systems associated with frequent cool-season light precipitation events. In an effort to improve the inventory of cool-season high-latitude light precipitation databases and advance the physical process assumptions made in satellite-based precipitation retrieval algorithm development, the CloudSat and GPM mission ground validation programs collaborated with the Finnish Meteorological Institute (FMI), the University of Helsinki (UH), and Environment Canada (EC) to conduct the Light Precipitation Validation Experiment (LPVEx). The LPVEx field campaign was designed to make detailed measurements of

  5. Advances in Experiment Design for High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.
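
    The Cramer-Rao bounds used above to compare input forms are the square roots of the diagonal of the inverse Fisher information matrix. A minimal sketch for a linear-in-parameters output model with independent Gaussian measurement noise; the regressors, record length, and noise level are illustrative assumptions, not HARV flight data.

```python
import numpy as np

# Cramer-Rao lower bounds for the parameter standard errors of a model
# y = X @ theta + noise, with independent Gaussian noise of variance sigma**2.
# Fisher information M = X.T @ X / sigma**2; the bounds are sqrt(diag(inv(M))).
N, sigma = 500, 0.05                      # illustrative record length and noise level
t = np.linspace(0.0, 10.0, N)
X = np.column_stack([np.ones(N),          # bias term
                     np.sin(0.5 * t),     # hypothetical input 1
                     np.cos(1.5 * t)])    # hypothetical input 2

fisher = X.T @ X / sigma**2
crb_std = np.sqrt(np.diag(np.linalg.inv(fisher)))
print("Cramer-Rao standard-error bounds:", np.round(crb_std, 5))
```

    An input (maneuver) design that lowers these bounds yields smaller standard errors on the estimated model parameters, which is the sense in which the input forms above are compared.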

  6. PASTIS: Bayesian extrasolar planet validation - I. General framework, models, and performance

    NASA Astrophysics Data System (ADS)

    Díaz, R. F.; Almenara, J. M.; Santerne, A.; Moutou, C.; Lethuillier, A.; Deleuil, M.

    2014-06-01

    A large fraction of the smallest transiting planet candidates discovered by the Kepler and CoRoT space missions cannot be confirmed by a dynamical measurement of the mass using currently available observing facilities. To establish their planetary nature, the concept of planet validation has been advanced. This technique compares the probability of the planetary hypothesis against that of all reasonably conceivable alternative false positive (FP) hypotheses. The candidate is considered as validated if the posterior probability of the planetary hypothesis is sufficiently larger than the sum of the probabilities of all FP scenarios. In this paper, we present PASTIS, the Planet Analysis and Small Transit Investigation Software, a tool designed to perform a rigorous model comparison of the hypotheses involved in the problem of planet validation, and to fully exploit the information available in the candidate light curves. PASTIS self-consistently models the transit light curves and follow-up observations. Its object-oriented structure offers a large flexibility for defining the scenarios to be compared. The performance is explored using artificial transit light curves of planets and FPs with a realistic error distribution obtained from a Kepler light curve. We find that data support the correct hypothesis strongly only when the signal is high enough (transit signal-to-noise ratio above 50 for the planet case) and remain inconclusive otherwise. PLAnetary Transits and Oscillations of stars (PLATO) shall provide transits with high enough signal-to-noise ratio, but to establish the true nature of the vast majority of Kepler and CoRoT transit candidates additional data or strong reliance on hypotheses priors is needed.
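
    The validation criterion described above amounts to a Bayesian model comparison: the posterior probability of each hypothesis is its prior times its marginal likelihood (evidence), normalised over all hypotheses, and the candidate is validated when the planet posterior sufficiently dominates the summed false-positive posteriors. A toy numerical sketch; the hypothesis list, priors, evidences, and odds threshold are invented for illustration and are not PASTIS output.

```python
# Toy sketch of the validation criterion: compare the posterior probability of
# the planet hypothesis with the summed posteriors of false-positive scenarios.
priors    = {"planet": 0.50, "eclipsing_binary": 0.30, "background_EB": 0.20}
evidences = {"planet": 3.0e-4, "eclipsing_binary": 1.0e-6, "background_EB": 5.0e-6}

unnorm = {h: priors[h] * evidences[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: v / total for h, v in unnorm.items()}

odds = posterior["planet"] / (1.0 - posterior["planet"])
print({h: round(p, 4) for h, p in posterior.items()})
print("validated" if odds > 100 else "inconclusive")  # illustrative odds threshold
```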

  7. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD) Manual v.1.2. The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology that is implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolving test issues. DOEPOD relies on observed occurrences of detections. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both Hit-Miss and signal amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so that multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. DOEPOD applications for supporting inspector qualifications are included.
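
    The 90/95 criterion can be checked from hit/miss data at a given flaw size without fitting a POD curve, by placing a one-sided lower confidence bound on the observed detection rate. A sketch using the exact (Clopper-Pearson) binomial bound; the hit and trial counts are hypothetical, and this is not the DOEPOD code itself.

```python
from scipy.stats import beta

def pod_lower_bound(hits, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower confidence bound on the detection probability."""
    if hits == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

# Hypothetical inspection results at a single flaw size: 29 detections in 29 trials.
lb = pod_lower_bound(29, 29)
print(f"95% lower bound on POD: {lb:.3f}")
print("meets 90/95 criterion" if lb >= 0.90 else "more data needed")
```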

  8. DC-8 and ER-2 in Sweden for the Sage III Ozone Loss and Validation Experiment (SOLVE)

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This 48 second video shows Dryden's Airborne Science aircraft in Kiruna Sweden in January 2000. The DC-8 and ER-2 conducted atmospheric studies for the Sage III Ozone Loss and Validation Experiment (SOLVE).

  9. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.

  10. Data Link Performance Analysis for LVLASO Experiments

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1998-01-01

    Low-visibility Landing and Surface Operations System (LVLASO) is currently being prototyped and tested at NASA Langley Research Center. Since the main objective of the system is to maintain aircraft landings and take-offs even during low-visibility conditions, timely exchange of positional and other information between the aircraft and the ground control is critical. For safety and reliability reasons, there are several redundant sources on the ground (e.g., ASDE, AMASS) that collect and disseminate information about the environment to the aircraft. The data link subsystem of LVLASO is responsible for supporting the timely transfer of information between the aircraft and the ground controllers. In fact, if not properly designed, the data link subsystem could become a bottleneck in the proper functioning of LVLASO. Currently, the other components of the system are being designed assuming that the data link has adequate capacity and is capable of delivering the information in a timely manner. During August 1-28, 1997, several flight experiments were conducted to test the prototypes of subsystems developed under the LVLASO project. The background and details of the tests are described in the next section. The test results have been collected in two CDs by FAA and Rockwell-Collins. Under the current grant, we have analyzed the data and evaluated the performance of the Mode S datalink. In this report, we summarize the results of our analysis. Most of the results are shown in terms of graphs or histograms. The test date (or experiment number) is typically plotted on the X-axis, and the Y-axis denotes the metric of interest in that chart. In interpreting these charts, one needs to take into account the vehicular traffic during a particular experiment. In general, the performance of the data link was found to be quite satisfactory in terms of delivering long and short Mode S squitters from the vehicles to the ground receiver. Similarly, its performance in delivering control

  11. The Reliability and Validity of Protocols for the Assessment of Endurance Sports Performance: An Updated Review

    ERIC Educational Resources Information Center

    Stevens, Christopher John; Dascombe, Ben James

    2015-01-01

    Sports performance testing is one of the most common and important measures used in sport science. Performance testing protocols must have high reliability to ensure any changes are not due to measurement error or inter-individual differences. High validity is also important to ensure test performance reflects true performance. Time-trial…

  12. Skylab experiment performance evaluation manual. Appendix S: Experiment T027 contamination measurement sample array (MSFC)

    NASA Technical Reports Server (NTRS)

    Tonetti, B. B.

    1973-01-01

    Analyses for Experiment T027, Contamination Measurement Sample Array (MSFC), to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and post-flight conditions are presented. Experiment contingency plan workaround procedure and malfunction analyses are presented in order to assist in making the experiment operationally successful.

  13. Students' and Teacher's Experiences of the Validity and Reliability of Assessment in a Bioscience Course

    ERIC Educational Resources Information Center

    Räisänen, Milla; Tuononen, Tarja; Postareff, Liisa; Hailikari, Telle; Virtanen, Viivi

    2016-01-01

    This case study explores the assessment of students' learning outcomes in a second-year lecture course in biosciences. The aim is to deeply explore the teacher's and the students' experiences of the validity and reliability of assessment and to compare those perspectives. The data were collected through stimulated recall interviews. The results…

  14. Securing Valid Information for Evaluation of Job Performance of the University Faculty.

    ERIC Educational Resources Information Center

    Donavan, Bruce

    Approaches to obtaining valid information for evaluating faculty and the issue of alcoholism and job performance are addressed. Among the complications to this undertaking is the existence of an invalid self-perception on the part of faculty that they are not employees of the institution, and a tolerance among faculty for deviance or eccentricity.…

  15. RELIABILITY AND VALIDITY OF AN ACCELEROMETRIC SYSTEM FOR ASSESSING VERTICAL JUMPING PERFORMANCE

    PubMed Central

    Laffaye, G.; Taiar, R.

    2014-01-01

    The validity of an accelerometric system (Myotest©) for assessing vertical jump height, vertical force and power, leg stiffness and reactivity index was examined. 20 healthy males performed 3 × “5 hops in place”, 3 × “1 squat jump” and 3 × “1 countermovement jump” during 2 test-retest sessions. The variables were simultaneously assessed using an accelerometer and a force platform at a frequency of 0.5 and 1 kHz, respectively. Both reliability and validity of the accelerometric system were studied. No significant differences between test and retest data were found (p < 0.05), showing a high level of reliability. Besides, moderate to high intraclass correlation coefficients (ICCs) (from 0.74 to 0.96) were obtained for all variables whereas weak to moderate ICCs (from 0.29 to 0.79) were obtained for force and power during the countermovement jump. With regards to validity, the difference between the two devices was not significant for 5 hops in place height (1.8 cm), force during squat (-1.4 N · kg−1) and countermovement (0.1 N · kg−1) jumps, leg stiffness (7.8 kN · m−1) and reactivity index (0.4). So, the measurements of these variables with this accelerometer are valid, which is not the case for the other variables. The main causes of non-validity for velocity, power and contact time assessment are temporal biases of the takeoff and touchdown moments detection. PMID:24917690

  16. Skylab experiment performance evaluation manual. Appendix N: Experiment S183 ultraviolet panorama (MSFC), revision 1

    NASA Technical Reports Server (NTRS)

    Purushotham, K. S.

    1972-01-01

    A series is presented of analyses for Experiment S183, Ultraviolet Panorama (MSFC), to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and post-flight conditions. Experiment contingency plan workaround procedure and malfunction analyses are presented in order to assist in making the experiment operationally successful.

  17. Skylab experiment performance evaluation manual. Appendix P: Experiment T003 inflight aerosol analysis (DOT/MSFC)

    NASA Technical Reports Server (NTRS)

    Purushotham, K. S.

    1972-01-01

    A series of analyses is presented for experiment T003, inflight aerosol analysis, to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and post-flight conditions. Experiment contingency plan workaround procedure and malfunction analyses are presented in order to assist in making the experiment operationally successful.

  18. Performance Contracting: A Forgotten Experiment in School Privatization.

    ERIC Educational Resources Information Center

    Ascher, Carol

    1996-01-01

    During the early 1970s, over 150 school districts and several states contracted with private companies to deliver instruction, and the Nixon Administration initiated a vast privatization field experiment in Texarkana. None of these performance contracting experiments significantly improved instruction. Instead, they raised issues of staffing,…

  19. STORMVEX: The Storm Peak Lab Cloud Property Validation Experiment Science and Operations Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, J; Matrosov, S; Shupe, M

    2010-09-29

    During the Storm Peak Lab Cloud Property Validation Experiment (STORMVEX), a substantial correlative data set of remote sensing observations and direct in situ measurements from fixed and airborne platforms will be created in a winter season, mountainous environment. This will be accomplished by combining mountaintop observations at Storm Peak Laboratory and the airborne National Science Foundation-supported Colorado Airborne Multi-Phase Cloud Study campaign with collocated measurements from the second ARM Mobile Facility (AMF2). We describe in this document the operational plans and motivating science for this experiment, which includes deployment of AMF2 to Steamboat Springs, Colorado. The intensive STORMVEX field phase will begin nominally on 1 November 2010 and extend to approximately early April 2011.

  20. Symptom and performance validity with veterans assessed for attention-deficit/hyperactivity disorder (ADHD).

    PubMed

    Shura, Robert D; Denning, John H; Miskey, Holly M; Rowland, Jared A

    2017-12-01

    Little is known about attention-deficit/hyperactivity disorder (ADHD) in veterans. Practice standards recommend the use of both symptom and performance validity measures in any assessment, and there are salient external incentives associated with ADHD evaluation (stimulant medication access and academic accommodations). The purpose of this study was to evaluate symptom and performance validity measures in a clinical sample of veterans presenting for specialty ADHD evaluation. Patients without a history of a neurocognitive disorder and for whom data were available on all measures (n = 114) completed a clinical interview structured on DSM-5 ADHD symptoms, the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF), and the Test of Memory Malingering Trial 1 (TOMM1) as part of a standardized ADHD diagnostic evaluation. Veterans meeting criteria for ADHD were not more likely to overreport symptoms on the MMPI-2-RF nor to fail TOMM1 (score ≤ 41) compared with those who did not meet criteria. Those who overreported symptoms did not endorse significantly more ADHD symptoms; however, those who failed TOMM1 did report significantly more ADHD symptoms (g = 0.90). In the total sample, 19.3% failed TOMM1, 44.7% overreported on the MMPI-2-RF, and 8.8% produced both an overreported MMPI-2-RF and invalid TOMM1. F-r had the highest correlation to TOMM1 scores (r = -.30). These results underscore the importance of assessing both symptom and performance validity in a clinical ADHD evaluation with veterans. In contrast to certain other conditions (e.g., mild traumatic brain injury), ADHD as a diagnosis is not related to higher rates of invalid report/performance in veterans. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Embedded measures of performance validity using verbal fluency tests in a clinical sample.

    PubMed

    Sugarman, Michael A; Axelrod, Bradley N

    2015-01-01

    The objective of this study was to determine to what extent verbal fluency measures can be used as performance validity indicators during neuropsychological evaluation. Participants were clinically referred for neuropsychological evaluation in an urban-based Veterans Affairs hospital. Participants were placed into 2 groups based on their objectively evaluated effort on performance validity tests (PVTs). Individuals who exhibited credible performance (n = 431) failed 0 PVTs, and those with poor effort (n = 192) failed 2 or more PVTs. All participants completed the Controlled Oral Word Association Test (COWAT) and Animals verbal fluency measures. We evaluated how well verbal fluency scores could discriminate between the 2 groups. Raw scores and T scores for Animals discriminated between the credible performance and poor-effort groups with 90% specificity and greater than 40% sensitivity. COWAT scores had lower sensitivity for detecting poor effort. A combination of FAS and Animals scores into logistic regression models yielded acceptable group classification, with 90% specificity and greater than 44% sensitivity. Verbal fluency measures can yield adequate detection of poor effort during neuropsychological evaluation. We provide suggested cut points and logistic regression models for predicting the probability of poor effort in our clinical setting and offer suggested cutoff scores to optimize sensitivity and specificity.
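
    A minimal sketch of the general approach described above: fit a logistic regression on the two fluency scores to separate credible-performance from poor-effort groups, then pick the predicted-probability cutoff that holds specificity near 90% and read off the resulting sensitivity. The scores are simulated and the resulting cut point is illustrative only; it does not reproduce the study's cutoffs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Simulated fluency scores: [FAS total, Animals total] for each examinee.
credible = rng.normal([38, 19], [9, 4], size=(400, 2))   # credible effort (label 0)
poor     = rng.normal([28, 13], [9, 4], size=(150, 2))   # poor effort (label 1)
X = np.vstack([credible, poor])
y = np.r_[np.zeros(len(credible)), np.ones(len(poor))]

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# Choose the predicted-probability cutoff giving ~90% specificity,
# then report the sensitivity achieved at that cutoff.
cutoff = np.quantile(p[y == 0], 0.90)
sensitivity = np.mean(p[y == 1] >= cutoff)
print(f"cutoff={cutoff:.2f}, specificity~0.90, sensitivity={sensitivity:.2f}")
```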

  2. Duque performs VIDEO-2 (VID-01) experiment

    NASA Image and Video Library

    2003-10-23

    ISS007-E-17848 (23 October 2003) --- Cosmonaut Alexander Y. Kaleri (right), Expedition 8 flight engineer, uses a camera to film a scientific experiment performed by European Space Agency (ESA) astronaut Pedro Duque of Spain in the Zvezda Service Module on the International Space Station (ISS). Kaleri represents Rosaviakosmos. Duque and Kaleri performed the European educational VIDEO-2 (VID-01) experiment, which uses the Russian DSR PD-150P digital video camcorder for recording demos of several basic physical phenomena, viz., Isaac Newton's three motion laws, with narration. [The demo made use of a sealed bag containing coffee and a syringe to fill one of two hollow balls with the brown liquid (to provide "mass", as opposed to the other, "mass-less" ball).

  3. Testing and validating environmental models

    USGS Publications Warehouse

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series

  4. Construct validity of the ovine model in endoscopic sinus surgery training.

    PubMed

    Awad, Zaid; Taghi, Ali; Sethukumar, Priya; Tolley, Neil S

    2015-03-01

    To demonstrate construct validity of the ovine model as a tool for training in endoscopic sinus surgery (ESS). Prospective, cross-sectional evaluation study. Over 18 consecutive months, trainees and experts were evaluated in their ability to perform a range of tasks (based on previous face validation and descriptive studies conducted by the same group) relating to ESS on the sheep-head model. Anonymized randomized video recordings of the above were assessed by two independent and blinded assessors. A validated assessment tool utilizing a five-point Likert scale was employed. Construct validity was calculated by comparing scores across training levels and experts using mean and interquartile range of global and task-specific scores. Subgroup analysis of the intermediate group ascertained previous experience. Nonparametric descriptive statistics were used, and analysis was carried out using SPSS version 21 (IBM, Armonk, NY). Reliability of the assessment tool was confirmed. The model discriminated well between different levels of expertise in global and task-specific scores. A positive correlation was noted between year in training and both global and task-specific scores (P < .001). Experience of the intermediate group was variable, and the number of ESS procedures performed under supervision had the highest impact on performance. This study describes an alternative model for ESS training and assessment. It is also the first to demonstrate construct validity of the sheep-head model for ESS training. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  5. MRPrimerW: a tool for rapid design of valid high-quality primers for multiple target qPCR experiments

    PubMed Central

    Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo

    2016-01-01

    Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests, lack of support for ranking of primers, TaqMan probes and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe the MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. PMID:27154272
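
    As a toy illustration of the kind of per-primer filtering constraints such tools apply before any homology testing, the sketch below screens candidate primers on length, GC content, and a rough Wallace-rule melting temperature. The thresholds and the Tm approximation are simplified assumptions and are not MRPrimerW's actual filters.

```python
def gc_content(seq):
    """Fraction of G and C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting temperature (Wallace rule): 2*(A+T) + 4*(G+C) degrees C."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def passes_filters(seq, length=(18, 24), gc=(0.4, 0.6), tm=(55, 65)):
    """Apply simple single-primer constraints; thresholds are illustrative."""
    return (length[0] <= len(seq) <= length[1]
            and gc[0] <= gc_content(seq) <= gc[1]
            and tm[0] <= wallace_tm(seq) <= tm[1])

for primer in ["ATGCGTACGTTAGCCTAGGA", "AAAAAATTTTTTAAAAATTT"]:
    print(primer, passes_filters(primer))
```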

  6. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    NASA Astrophysics Data System (ADS)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking run as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements, to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  7. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  8. An Inventory Battery to Predict Navy and Marine Corps Recruiter Performance: Development and Validation

    DTIC Science & Technology

    1979-05-01

    Only fragments of this report are recoverable here: table-of-contents entries on applying cross-validation strategies to different sections of the predictor battery and on the personality scales, and a note that several performance indices were generated on the basis of assumptions about the recruiting environment and geographical differences in production.

  9. Reliability and validity of functional performance tests in dancers with hip dysfunction.

    PubMed

    Kivlan, Benjamin R; Carcia, Christopher R; Clemente, F Richard; Phelps, Amy L; Martin, Robroy L

    2013-08-01

    Quasi-experimental, repeated measures. Functional performance tests that identify hip joint impairments and assess the effect of intervention have not been adequately described for dancers. The purpose of this study was to examine the reliability and validity of hop and balance tests among a group of dancers with musculoskeletal pain in the hip region. NINETEEN FEMALE DANCERS (AGE: 18.90±1.11 years; height: 164.85±6.95 cm; weight: 60.37±8.29 kg) with unilateral hip pain were assessed utilizing the cross-over reach, medial triple hop, lateral triple hop, and cross-over hop tests on two occasions, 2 days apart. Test-retest reliability and comparisons between the involved and uninvolved side for each respective test were determined. Intra-class correlation coefficients for the functional performance tests ranged from 0.89-0.96. The cross-over reach test had a SEM of 2.79 cm and a MDC of 7.73 cm. The medial and lateral triple hop tests had SEM values of 7.51 cm and 8.17 cm, and MDC values of 20.81 cm and 22.62 cm, respectively. The SEM was 0.15 seconds and the MDC was 0.42 seconds for the cross-over hop test. Performance on the medial triple hop test was significantly less on the involved side (370.21±38.26 cm) compared to the uninvolved side (388.05±41.49 cm); t(18) = -4.33, p<0.01. The side-to-side comparisons of the cross-over reach test (involved mean=61.68±10.9 cm; uninvolved mean=61.69±8.63 cm); t(18) = -0.004, p=0.99, lateral triple hop test (involved mean=306.92±35.79 cm; uninvolved mean=310.68±24.49 cm); t(18) = -0.55, p=0.59, and cross-over hop test (involved mean=2.49±0.34 seconds; uninvolved mean= 2.61±0.42 seconds; t(18) = -1.84, p=0.08) were not statistically different between sides. The functional performance tests used in this study can be reliably performed on dancers with unilateral hip pain. The medial triple hop test was the only functional performance test with evidence of validity in side-to-side comparisons. These results suggest that
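
    The SEM and MDC values reported above follow the standard relations SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. A small sketch with illustrative numbers rather than the study's raw data.

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative values for a hop test: between-subject SD of 25 cm, ICC of 0.95.
sem = sem_from_icc(sd=25.0, icc=0.95)
print(f"SEM = {sem:.1f} cm, MDC95 = {mdc95(sem):.1f} cm")
```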

  10. RELIABILITY AND VALIDITY OF FUNCTIONAL PERFORMANCE TESTS IN DANCERS WITH HIP DYSFUNCTION

    PubMed Central

    Carcia, Christopher R.; Clemente, F. Richard; Phelps, Amy L.; Martin, RobRoy L.

    2013-01-01

    Study Design: Quasi-experimental, repeated measures. Purpose/Background: Functional performance tests that identify hip joint impairments and assess the effect of intervention have not been adequately described for dancers. The purpose of this study was to examine the reliability and validity of hop and balance tests among a group of dancers with musculoskeletal pain in the hip region. Methods: Nineteen female dancers (age: 18.90±1.11 years; height: 164.85±6.95 cm; weight: 60.37±8.29 kg) with unilateral hip pain were assessed utilizing the cross-over reach, medial triple hop, lateral triple hop, and cross-over hop tests on two occasions, 2 days apart. Test-retest reliability and comparisons between the involved and uninvolved side for each respective test were determined. Results: Intra-class correlation coefficients for the functional performance tests ranged from 0.89-0.96. The cross-over reach test had a SEM of 2.79 cm and a MDC of 7.73 cm. The medial and lateral triple hop tests had SEM values of 7.51 cm and 8.17 cm, and MDC values of 20.81 cm and 22.62 cm, respectively. The SEM was 0.15 seconds and the MDC was 0.42 seconds for the cross-over hop test. Performance on the medial triple hop test was significantly less on the involved side (370.21±38.26 cm) compared to the uninvolved side (388.05±41.49 cm); t(18) = −4.33, p<0.01. The side-to-side comparisons of the cross-over reach test (involved mean=61.68±10.9 cm; uninvolved mean=61.69±8.63 cm); t(18) = −0.004, p=0.99, lateral triple hop test (involved mean=306.92±35.79 cm; uninvolved mean=310.68±24.49 cm); t(18) = −0.55, p=0.59, and cross-over hop test (involved mean=2.49±0.34 seconds; uninvolved mean= 2.61±0.42 seconds; t(18) = −1.84, p=0.08) were not statistically different between sides. Conclusion: The functional performance tests used in this study can be reliably performed on dancers with unilateral hip pain. The medial triple hop test was the only functional performance test with

  11. The COSIMA experiments and their verification, a data base for the validation of two phase flow computer codes

    NASA Astrophysics Data System (ADS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.

  12. Observations on CFD Verification and Validation from the AIAA Drag Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Kleb, Bil; Vassberg, John C.

    2014-01-01

    The authors provide observations from the AIAA Drag Prediction Workshops that have spanned over a decade and from a recent validation experiment at NASA Langley. These workshops provide an assessment of the predictive capability of forces and moments, focused on drag, for transonic transports. It is very difficult to manage the consistency of results in a workshop setting to perform verification and validation at the scientific level, but it may be sufficient to assess it at the level of practice. Observations thus far: 1) due to simplifications in the workshop test cases, wind tunnel data are not necessarily the “correct” results that CFD should match, 2) an average of core CFD data are not necessarily a better estimate of the true solution as it is merely an average of other solutions and has many coupled sources of variation, 3) outlier solutions should be investigated and understood, and 4) the DPW series does not have the systematic build up and definition on both the computational and experimental side that is required for detailed verification and validation. Several observations regarding the importance of the grid, effects of physical modeling, benefits of open forums, and guidance for validation experiments are discussed. The increased variation in results when predicting regions of flow separation and increased variation due to interaction effects, e.g., fuselage and horizontal tail, point out the need for validation data sets for these important flow phenomena. Experiences with a recent validation experiment at NASA Langley are included to provide guidance on validation experiments.

  13. Validation of a Dry Model for Assessing the Performance of Arthroscopic Hip Labral Repair.

    PubMed

    Phillips, Lisa; Cheung, Jeffrey J H; Whelan, Daniel B; Murnaghan, Michael Lucas; Chahal, Jas; Theodoropoulos, John; Ogilvie-Harris, Darrell; Macniven, Ian; Dwyer, Tim

    2017-07-01

    Arthroscopic hip labral repair is a technically challenging and demanding surgical technique with a steep learning curve. Arthroscopic simulation allows trainees to develop these skills in a safe environment. The purpose of this study was to evaluate the use of a combination of assessment ratings for the performance of arthroscopic hip labral repair on a dry model. Cross-sectional study; Level of evidence, 3. A total of 47 participants including orthopaedic surgery residents (n = 37), sports medicine fellows (n = 5), and staff surgeons (n = 5) performed arthroscopic hip labral repair on a dry model. Prior arthroscopic experience was noted. Participants were evaluated by 2 orthopaedic surgeons using a task-specific checklist, the Arthroscopic Surgical Skill Evaluation Tool (ASSET), task completion time, and a final global rating scale. All procedures were video-recorded and scored by an orthopaedic fellow blinded to the level of training of each participant. The internal consistency/reliability (Cronbach alpha) using the total ASSET score for the procedure was high (intraclass correlation coefficient > 0.9). One-way analysis of variance for the total ASSET score demonstrated a difference between participants based on the level of training ( F 3,43 = 27.8, P < .001). A good correlation was seen between the ASSET score and previous exposure to arthroscopic procedures ( r = 0.52-0.73, P < .001). The interrater reliability for the ASSET score was excellent (>0.9). The results of this study demonstrate that the use of dry models to assess the performance of arthroscopic hip labral repair by trainees is both valid and reliable. Further research will be required to demonstrate a correlation with performance on cadaveric specimens or in the operating room.

  14. Reproducibility, Reliability, and Validity of Fuchsin-Based Beads for the Evaluation of Masticatory Performance.

    PubMed

    Sánchez-Ayala, Alfonso; Farias-Neto, Arcelino; Vilanova, Larissa Soares Reis; Costa, Marina Abrantes; Paiva, Ana Clara Soares; Carreiro, Adriana da Fonte Porto; Mestriner-Junior, Wilson

    2016-08-01

    Rehabilitation of masticatory function is inherent to prosthodontics; however, despite the various techniques for evaluating oral comminution, the methodological suitability of these has not been completely studied. The aim of this study was to determine the reproducibility, reliability, and validity of a test food based on fuchsin beads for masticatory function assessment. Masticatory performance was evaluated in 20 dentate subjects (mean age, 23.3 years) using two kinds of test foods and methods: fuchsin beads and ultraviolet-visible spectrophotometry, and silicone cubes and multiple sieving as gold standard. Three examiners conducted five masticatory performance trials with each test food. Reproducibility of the results from both test foods was separately assessed using the intraclass correlation coefficient (ICC). Reliability and validity of fuchsin bead data were measured by comparing the average mean of absolute differences and the measurement means, respectively, regarding silicone cube data using the paired Student's t-test (α = 0.05). Intraexaminer and interexaminer ICC for the fuchsin bead values were 0.65 and 0.76 (p < 0.001), respectively; those for the silicone cubes values were 0.93 and 0.91 (p < 0.001), respectively. Reliability revealed intraexaminer (p < 0.001) and interexaminer (p < 0.05) differences between the average means of absolute differences of each test foods. Validity also showed differences between the measurement means of each test food (p < 0.001). Intra- and interexaminer reproducibility of the test food based on fuchsin beads for evaluation of masticatory performance were good and excellent, respectively; however, the reliability and validity were low, because fuchsin beads do not measure the grinding capacity of masticatory function as silicone cubes do; instead, this test food describes the crushing potential of teeth. Thus, the two kinds of test foods evaluate different properties of masticatory capacity, confirming fuchsin

  15. Development, construct validity and test-retest reliability of a field-based wheelchair mobility performance test for wheelchair basketball.

    PubMed

    de Witte, Annemarie M H; Hoozemans, Marco J M; Berger, Monique A M; van der Slikke, Rienk M A; van der Woude, Lucas H V; Veeger, Dirkjan H E J

    2018-01-01

    The aim of this study was to develop and describe a wheelchair mobility performance test in wheelchair basketball and to assess its construct validity and reliability. To mimic mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of wheelchair basketball matches and expert judgement. Forty-six players performed the test to determine its validity and 23 players performed the test twice for reliability. Independent-samples t-tests were used to assess whether the times needed to complete the test were different for classifications, playing standards and sex. Intraclass correlation coefficients (ICC) were calculated to quantify reliability of performance times. Males performed better than females (P < 0.001, effect size [ES] = -1.26) and international men performed better than national men (P < 0.001, ES = -1.62). Performance time of low (≤2.5) and high (≥3.0) classification players was borderline not significant with a moderate ES (P = 0.06, ES = 0.58). The reliability was excellent for overall performance time (ICC = 0.95). These results show that the test can be used as a standardised mobility performance test to validly and reliably assess the capacity in mobility performance of elite wheelchair basketball athletes. Furthermore, the described methodology of development is recommended for use in other sports to develop sport-specific tests.

  16. Initial performance of the COSINE-100 experiment

    NASA Astrophysics Data System (ADS)

    Adhikari, G.; Adhikari, P.; de Souza, E. Barbosa; Carlin, N.; Choi, S.; Choi, W. Q.; Djamal, M.; Ezeribe, A. C.; Ha, C.; Hahn, I. S.; Hubbard, A. J. F.; Jeon, E. J.; Jo, J. H.; Joo, H. W.; Kang, W. G.; Kang, W.; Kauer, M.; Kim, B. H.; Kim, H.; Kim, H. J.; Kim, K. W.; Kim, M. C.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Kudryavtsev, V. A.; Lee, H. S.; Lee, J.; Lee, J. Y.; Lee, M. H.; Leonard, D. S.; Lim, K. E.; Lynch, W. A.; Maruyama, R. H.; Mouton, F.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, J. S.; Park, K. S.; Pettus, W.; Pierpoint, Z. P.; Prihtiadi, H.; Ra, S.; Rogers, F. R.; Rott, C.; Scarff, A.; Spooner, N. J. C.; Thompson, W. G.; Yang, L.; Yong, S. H.

    2018-02-01

    COSINE is a dark matter search experiment based on an array of low background NaI(Tl) crystals located at the Yangyang underground laboratory. The assembly of COSINE-100 was completed in the summer of 2016 and the detector is currently collecting physics quality data aimed at reproducing the DAMA/LIBRA experiment that reported an annual modulation signal. Stable operation has been achieved and will continue for at least 2 years. Here, we describe the design of COSINE-100, including the shielding arrangement, the configuration of the NaI(Tl) crystal detection elements, the veto systems, and the associated operational systems, and we show the current performance of the experiment.

  17. Validating the Use of Performance Risk Indices for System-Level Risk and Maturity Assessments

    NASA Astrophysics Data System (ADS)

    Holloman, Sherrica S.

    With pressure on the U.S. Defense Acquisition System (DAS) to reduce cost overruns and schedule delays, system engineers' performance is only as good as their tools. Recent literature details a need for 1) objective, analytical risk quantification methodologies over traditional subjective qualitative methods -- such as, expert judgment, and 2) mathematically rigorous system-level maturity assessments. The Mahafza, Componation, and Tippett (2005) Technology Performance Risk Index (TPRI) ties the assessment of technical performance to the quantification of risk of unmet performance; however, it is structured for component- level data as input. This study's aim is to establish a modified TPRI with systems-level data as model input, and then validate the modified index with actual system-level data from the Department of Defense's (DoD) Major Defense Acquisition Programs (MDAPs). This work's contribution is the establishment and validation of the System-level Performance Risk Index (SPRI). With the introduction of the SPRI, system-level metrics are better aligned, allowing for better assessment, tradeoff and balance of time, performance and cost constraints. This will allow system engineers and program managers to ultimately make better-informed system-level technical decisions throughout the development phase.
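
    One generic way to tie a measured technical performance parameter to a risk of unmet performance is to treat the current estimate as normally distributed and take the probability of falling short of the requirement. The sketch below illustrates only that general idea; it is not the TPRI or SPRI formulation, and the metric, requirement, and uncertainty shown are invented.

```python
from math import erf, sqrt

def prob_unmet(requirement, mean, std, higher_is_better=True):
    """Probability that achieved performance fails the requirement,
    assuming the performance estimate is Normal(mean, std)."""
    z = (requirement - mean) / std
    p_below = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # P(performance < requirement)
    return p_below if higher_is_better else 1.0 - p_below

# Illustrative system-level metric: required range 500 km, current estimate 520 +/- 15 km.
print(f"risk of unmet performance: {prob_unmet(500.0, 520.0, 15.0):.3f}")
```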

  18. A cylindrical tripleGEM detector for the BESIII experiment: Measurement of the performance in a magnetic field and project status

    NASA Astrophysics Data System (ADS)

    Farinelli, R.; BESIII CGEM Group

    2017-01-01

    A new cylindrical GEM detector is under development to upgrade the tracking system of the BESIII experiment at IHEP in Beijing. The new detector will replace the current inner drift chamber of the experiment in order to significantly improve the spatial resolution along the beam direction (σ_z ≈ 300 μm) while preserving the momentum resolution (σ_{p_t}/p_t ≈ 0.5% at 1 GeV) and the spatial resolution in the transverse plane (σ_{xy} ≈ 130 μm). A cylindrical prototype with the final detector dimensions has been built and the assembly procedure has been successfully validated. Moreover, the performance of a 10 × 10 cm² planar GEM has been studied inside a magnetic field by means of a beam test at CERN. The data have been analyzed using two different readout modes: the charge centroid (CC) and the micro time projection chamber (μTPC) methods.
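
    The charge-centroid (CC) readout mentioned above estimates the hit position as the charge-weighted mean of the fired strip positions. A minimal sketch; the strip pitch and charges below are chosen for illustration only.

```python
def charge_centroid(strip_positions_mm, charges):
    """Hit position estimate: charge-weighted mean of the fired strip positions."""
    total = sum(charges)
    return sum(x * q for x, q in zip(strip_positions_mm, charges)) / total

# Illustrative cluster: five adjacent strips at 0.65 mm pitch with measured charges (ADC counts).
positions = [0.0, 0.65, 1.30, 1.95, 2.60]
charges   = [12, 55, 130, 60, 18]
print(f"reconstructed position: {charge_centroid(positions, charges):.3f} mm")
```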

  19. Medical performance and the 'inaccessible' experience of illness: an exploratory study.

    PubMed

    Weitkamp, Emma; Mermikides, Alex

    2016-09-01

    We report a survey of audience members' responses (147 questionnaires collected at seven performances) and 10 in-depth interviews (five former patients and two family members, three medical practitioners) to bloodlines, a medical performance exploring the experience of haematopoietic stem-cell transplant as treatment for acute leukaemia. Performances took place in 2014 and 2015. The article argues that performances that are created through interdisciplinary collaboration can convey otherwise 'inaccessible' illness experiences in ways that audience members with personal experience recognise as familiar, and find emotionally affecting. In particular such performances are adept at interweaving 'objectivist' (objective, medical) and 'subjectivist' (subjective, emotional) perspectives of the illness experience, and indeed, at challenging such distinctions. We suggest that reflecting familiar yet hard-to-articulate experiences may be beneficial for the ongoing emotional recovery of people who have survived serious disease, particularly in relation to the isolation that they experience during and as a consequence of their treatment. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  20. Assessing positive and negative experiences: validation of a new measure of well-being in an Italian population.

    PubMed

    Corno, Giulia; Molinari, Guadalupe; Baños, Rosa Maria

    2016-01-01

    The aim of this study is to explore the psychometric properties of an affect scale, the Scale of Positive and Negative Experience (SPANE), in an Italian-speaking population. The results demonstrate that the Italian version of the SPANE has psychometric properties similar to those shown by the original and previous versions, and it presents satisfactory reliability and factorial validity. The results of the Confirmatory Factor Analysis support the expected two-factor structure, positive and negative feeling, which characterized the previous versions. As expected, measures of negative affect, anxiety, negative future expectancies, and depression correlated positively with the negative experiences SPANE subscale, and negatively with the positive experiences SPANE subscale. The use of this instrument provides clinically useful information about a person's overall emotional experience and it is an indicator of well-being. Although further studies are required to confirm the psychometric characteristics of the scale, the SPANE Italian version is expected to improve theoretical and empirical research on the well-being of the Italian population.

  1. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
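
    For reference, an observed order-of-accuracy test of the kind mentioned above is commonly computed from a scalar output on three systematically refined grids using the Richardson-extrapolation relation p = ln[(f3 - f2)/(f2 - f1)] / ln(r). The sketch below assumes a constant refinement ratio and monotone convergence; the numbers are illustrative and are not taken from the study.

        import math

        def observed_order(f_coarse, f_medium, f_fine, r):
            """Observed order of accuracy from three grid levels with constant refinement ratio r.

            The arguments are one scalar output (e.g., a surface pressure coefficient)
            computed on successively refined grids; monotone convergence is assumed.
            """
            return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

        # Illustrative values: a quantity converging at roughly second order with r = 2.
        print(observed_order(1.1200, 1.0300, 1.0075, r=2.0))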

  2. Aircraft Wake Vortex Spacing System (AVOSS) Performance Update and Validation Study

    NASA Technical Reports Server (NTRS)

    Rutishauser, David K.; O'Connor, Cornelius J.

    2001-01-01

    An analysis has been performed on data generated from the two most recent field deployments of the Aircraft Wake VOrtex Spacing System (AVOSS). The AVOSS provides reduced aircraft spacing criteria for wake vortex avoidance as compared to the FAA spacing applied under Instrument Flight Rules (IFR). Several field deployments culminating in a system demonstration at Dallas Fort Worth (DFW) International Airport in the summer of 2000 were successful in showing a sound operational concept and the system's potential to provide a significant benefit to airport operations. For DFW, a predicted average throughput increase of 6% was observed. This increase implies 6 or 7 more aircraft on the ground in a one-hour period for DFW operations. Several studies of performance correlations to system configuration options, design options, and system inputs are also reported. The studies focus on the validation performance of the system.

  3. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  4. CRYOSAT-2: POST Launch Performance of SIRAL-2 and its Calibration/validation

    NASA Astrophysics Data System (ADS)

    Cullen, Robert

    1. INTRODUCTION The main payload of CryoSat-2 [1], SIRAL (Synthetic interferometric radar altimeter), is a Ku-band pulse-width limited radar altimeter which transmits pulses at a high pulse repetition frequency, thus making received echoes phase coherent and suitable for azimuth processing [2]. The azimuth processing, in conjunction with correction for slant range, improves along-track resolution to about 250 meters, which is a significant improvement over traditional pulse-width limited systems such as Envisat RA-2 [3]. CryoSat-2 will be launched on 25th February 2010 and this paper describes the pre- and post-launch measures of CryoSat/SIRAL performance and the status of mission validation planning. 2. SIRAL PERFORMANCE: INTERNAL AND EXTERNAL CALIBRATION Phase coherent pulse-width limited radar altimeters such as SIRAL-2 pose a new challenge when considering a strategy for calibration. Along with the need to generate the well understood corrections for transfer function amplitude with respect to frequency, gain and instrument path delay, there is also a need to provide corrections for transfer function phase with respect to frequency and AGC setting, and for phase variation across bursts of pulses. Furthermore, since some components of these radars are temperature sensitive, one needs to be careful when deciding how often calibrations are performed whilst not impacting mission performance. Several internal calibration ground processors have been developed to model imperfections within the CryoSat-2 radar altimeter (SIRAL-2) hardware and remove their effect from the science data stream via the use of calibration correction auxiliary products within the ground segment. We present the methods and results used to model and remove imperfections and describe the baseline for usage of SIRAL-2 calibration modes during the commissioning phase and the operational exploitation phases of the mission. Additionally we present early results derived from external calibration of SIRAL

  5. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD.

    PubMed

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A; Campos, Michael A; Cahalin, Lawrence P

    2018-01-01

    The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Test-retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test-retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. The TIRE measures of MIP, SMIP and ID have excellent test-retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP.
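
    Because the SMIP is described as the integration of inspiratory pressure over the inspiratory duration, it can be approximated numerically from a sampled pressure-time trace, for example by trapezoidal integration. The sketch below is a generic illustration under that assumption; it does not reproduce the PrO2 device's internal processing, and the sample trace is invented.

        import numpy as np

        def tire_measures(pressure_cmh2o, sample_rate_hz):
            """Return (SMIP, MIP, inspiratory duration) for one sampled TIRE effort.

            SMIP is the trapezoidal pressure-time integral (cmH2O*s), MIP the peak
            pressure (cmH2O), and the duration is the length of the trace (s).
            """
            p = np.asarray(pressure_cmh2o, dtype=float)
            smip = float(np.sum((p[1:] + p[:-1]) * 0.5) / sample_rate_hz)
            mip = float(p.max())
            duration = (p.size - 1) / sample_rate_hz
            return smip, mip, duration

        # Invented trace: pressure ramps up, plateaus, then decays over ~5 s at 100 Hz.
        trace = np.concatenate([np.linspace(0, 80, 150), np.full(200, 80.0), np.linspace(80, 0, 150)])
        print(tire_measures(trace, sample_rate_hz=100))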

  6. Retrieval and validation of carbon dioxide, methane and water vapor for the Canary Islands IR-laser occultation experiment

    NASA Astrophysics Data System (ADS)

    Proschek, V.; Kirchengast, G.; Schweitzer, S.; Brooke, J. S. A.; Bernath, P. F.; Thomas, C. B.; Wang, J.-G.; Tereszchuk, K. A.; González Abad, G.; Hargreaves, R. J.; Beale, C. A.; Harrison, J. J.; Martin, P. A.; Kasyutich, V. L.; Gerbig, C.; Kolle, O.; Loescher, A.

    2014-11-01

    The first ground-based experiment to prove the concept of a novel space-based observation technique for microwave and infrared-laser occultation between Low Earth Orbit satellites (LMIO) was performed in the Canary Islands between La Palma and Tenerife in July 2011. This experiment aimed to demonstrate the infrared-laser differential transmission principle for the measurement of greenhouse gases (GHGs) in the free atmosphere. Such global and long-term stable measurements of GHGs, accompanied also by measurements of thermodynamic parameters and line-of-sight wind in a self-calibrating way, have become very important for climate change monitoring. The experiment delivered promising initial data for demonstrating the new observation concept by retrieving volume mixing ratios of GHGs along a ~ 144 km signal path at altitudes of ~ 2.4 km. Here, we present a detailed analysis of the measurements, following a recent publication that introduced the experiment's technical setup and first results for an example retrieval of CO2. We present the observational and validation datasets, the latter simultaneously measured at the transmitter and receiver sites, the measurement data handling, and the differential transmission retrieval procedure. We also determine the individual and combined uncertainties influencing the results and present the retrieval results for 12CO2, 13CO2, C18OO, H2O and CH4. The new method is found to have a reliable basis for monitoring of greenhouse gases such as CO2, CH4, and H2O in the free atmosphere.
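
    The differential-transmission principle referred to above can be illustrated with a simple two-wavelength Beer-Lambert calculation: the log-ratio of the off-line and on-line transmissions, divided by the differential absorption cross-section and the path length, gives the mean number density of the absorber along the path. The sketch below is only a schematic of that principle with placeholder values; it omits the instrumental corrections and the full retrieval chain described in the paper.

        import math

        def mean_number_density(trans_on, trans_off, sigma_on, sigma_off, path_length_m):
            """Mean absorber number density (molecules per m^3) along the signal path.

            trans_on/trans_off are measured transmissions at the on-line and off-line
            frequencies; sigma_on/sigma_off are absorption cross-sections (m^2) there.
            """
            differential_optical_depth = math.log(trans_off / trans_on)
            return differential_optical_depth / ((sigma_on - sigma_off) * path_length_m)

        # Placeholder values only, not from the Canary Islands experiment.
        n = mean_number_density(trans_on=0.62, trans_off=0.80,
                                sigma_on=5.0e-26, sigma_off=1.0e-27, path_length_m=144e3)
        print(f"{n:.3e} molecules per m^3")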

  7. Experiences using IAEA Code of practice for radiation sterilization of tissue allografts: Validation and routine control

    NASA Astrophysics Data System (ADS)

    Hilmy, N.; Febrida, A.; Basril, A.

    2007-11-01

    Problems of tissue allografts in using International Standard (ISO) 11137 for validation of radiation sterilization dose (RSD) are limited and low numbers of uniform samples per production batch, those are products obtained from one donor. Allograft is a graft transplanted between two different individuals of the same species. The minimum number of uniform samples needed for verification dose (VD) experiment at the selected sterility assurance level (SAL) per production batch according to the IAEA Code is 20, i.e., 10 for bio-burden determination and the remaining 10 for sterilization test. Three methods of the IAEA Code have been used for validation of RSD, i.e., method A1 that is a modification of method 1 of ISO 11137:1995, method B (ISO 13409:1996), and method C (AAMI TIR 27:2001). This paper describes VD experiments using uniform products obtained from one cadaver donor, i.e., cancellous bones, demineralized bone powders and amnion grafts from one life donor. Results of the verification dose experiments show that RSD is 15.4 kGy for cancellous and demineralized bone grafts and 19.2 kGy for amnion grafts according to method A1 and 25 kGy according to methods B and C.

  8. Validation of a Performance Assessment Instrument in Problem-Based Learning Tutorials Using Two Cohorts of Medical Students

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2016-01-01

    Although problem-based learning (PBL) has been widely used in medical schools, few studies have attended to the assessment of PBL processes using validated instruments. This study examined reliability and validity for an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism.…

  9. Design of a Hydro-Turbine Blade for Acoustic and Performance Validation Studies

    NASA Astrophysics Data System (ADS)

    Johnson, E.; Barone, M.

    2011-12-01

    To meet growing global energy demands, governments and industry have recently begun to focus on marine hydrokinetic (MHK) devices as an additional form of power generation. Water turbines have become a popular design choice since they are able to leverage experience from the decades-old wind industry in the hope of decreasing time-to-market. However, the difference in environments poses challenges that need to be addressed. In particular, little research has addressed the acoustic effects of common aerofoils in a marine setting. This has a potential impact on marine life and may cause early fatigue by exciting new structural modes. An initial blade design is presented, which has been used to begin characterization of any structural and acoustic issues that may arise from a direct one-to-one swap of wind technologies into MHK devices. The blade was optimized for performance using blade-element momentum theory while requiring that it not exceed the allowable stress under a specified extreme operating design condition. This limited the maximum power generated, while ensuring a realizable blade. A stress analysis within ANSYS was performed to validate the structural integrity of the design. Additionally, predictions of the radiated noise from the MHK rotor will be made using boundary element modeling based on flow results from ANSYS CFX, a computational fluid dynamics (CFD) code. The FEA and CFD results show good agreement with the expected design. Determining a range for the anticipated noise produced from an MHK turbine provides a look at the environmental impact these devices will have. Future efforts will focus on the design constraints noise generation places on MHK devices.

  10. Validating Human Performance Models of the Future Orion Crew Exploration Vehicle

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.; Walters, Brett; Fairey, Lisa

    2010-01-01

    NASA's Orion Crew Exploration Vehicle (CEV) will provide transportation for crew and cargo to and from destinations in support of the Constellation Architecture Design Reference Missions. Discrete Event Simulation (DES) is one of the design methods NASA employs to model crew performance for the CEV. During the early development of the CEV, NASA and its prime Orion contractor Lockheed Martin (LM) sought an effective, low-cost method for developing and validating human performance DES models. This paper focuses on the method developed while creating a DES model for the CEV Rendezvous, Proximity Operations, and Docking (RPOD) task to the International Space Station. Our approach to validation was to attack the problem from several fronts. First, we began the development of the model early in the CEV design stage. Second, we adhered strictly to M&S development standards. Third, we involved the stakeholders, NASA astronauts, subject matter experts, and NASA's modeling and simulation development community throughout. Fourth, we applied standard and easy-to-conduct methods to ensure the model's accuracy. Lastly, we reviewed the data from an earlier human-in-the-loop RPOD simulation that had different objectives, which provided us an additional means to estimate the model's confidence level. The results revealed that a majority of the DES model was a reasonable representation of the current CEV design.

  11. Medical performance and the ‘inaccessible’ experience of illness: an exploratory study

    PubMed Central

    Weitkamp, Emma; Mermikides, Alex

    2016-01-01

    We report a survey of audience members' responses (147 questionnaires collected at seven performances) and 10 in-depth interviews (five former patients and two family members, three medical practitioners) to bloodlines, a medical performance exploring the experience of haematopoietic stem-cell transplant as treatment for acute leukaemia. Performances took place in 2014 and 2015. The article argues that performances that are created through interdisciplinary collaboration can convey otherwise ‘inaccessible’ illness experiences in ways that audience members with personal experience recognise as familiar, and find emotionally affecting. In particular such performances are adept at interweaving ‘objectivist’ (objective, medical) and ‘subjectivist’ (subjective, emotional) perspectives of the illness experience, and indeed, at challenging such distinctions. We suggest that reflecting familiar yet hard-to-articulate experiences may be beneficial for the ongoing emotional recovery of people who have survived serious disease, particularly in relation to the isolation that they experience during and as a consequence of their treatment. PMID:27466255

  12. Skylab experiment performance evaluation manual. Appendix J: Experiment M555 gallium arsenide single crystal growth (MSFC)

    NASA Technical Reports Server (NTRS)

    Byers, M. S.

    1973-01-01

    Analyses for Experiment M555, Gallium Arsenide Single Crystal Growth (MSFC), to be used for evaluating the performance of the Skylab corollary experiments under preflight, inflight, and post-flight conditions are presented. Experiment contingency plan workaround procedure and malfunction analyses are presented in order to assist in making the experiment operationally successful.

  13. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim

    2017-09-01

    For many years now, IRSN has developed its own Monte Carlo continuous energy capability, which allows testing various nuclear data libraries. In that prospect, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values and main discrepancies are analyzed regarding the neutron spectrum. Special attention is paid on benchmarks for which the results have been highly modified between both JEFF-3 versions.
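
    Benchmark comparisons of this kind usually reduce to the relative deviation between the calculated and benchmark keff values, often quoted in pcm and judged against the combined Monte Carlo and benchmark uncertainty. The sketch below illustrates that bookkeeping with invented numbers; it is not the IRSN analysis itself.

        import math

        def keff_discrepancy_pcm(k_calc, sigma_calc, k_bench, sigma_bench):
            """Return (C-E)/E in pcm and the combined 1-sigma uncertainty in pcm."""
            deviation_pcm = (k_calc - k_bench) / k_bench * 1e5
            combined_pcm = math.hypot(sigma_calc, sigma_bench) / k_bench * 1e5
            return deviation_pcm, combined_pcm

        # Invented benchmark case (not from the JEFF-3.2 validation suite).
        dev, unc = keff_discrepancy_pcm(k_calc=1.00250, sigma_calc=0.00010,
                                        k_bench=1.00000, sigma_bench=0.00120)
        print(f"C-E = {dev:.0f} pcm +/- {unc:.0f} pcm")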

  14. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2016-01-01

    Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss-from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges-and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.

  15. The construct and criterion validity of the multi-source feedback process to assess physician performance: a meta-analysis

    PubMed Central

    Al Ansari, Ahmed; Donnon, Tyrone; Al Khalifa, Khalid; Darwish, Abdulla; Violato, Claudio

    2014-01-01

    Background The purpose of this study was to conduct a meta-analysis on the construct and criterion validity of multi-source feedback (MSF) to assess physicians and surgeons in practice. Methods In this study, we followed the guidelines for the reporting of observational studies included in a meta-analysis. In addition to PubMed and MEDLINE databases, the CINAHL, EMBASE, and PsycINFO databases were searched from January 1975 to November 2012. All articles listed in the references of the MSF studies were reviewed to ensure that all relevant publications were identified. All 35 articles were independently coded by two authors (AA, TD), and any discrepancies (eg, effect size calculations) were reviewed by the other authors (KA, AD, CV). Results Physician/surgeon performance measures from 35 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in: construct validity coefficients for the MSF system on physician/surgeon performance across different levels in practice ranged from d=0.14 (95% confidence interval [CI] 0.40–0.69) to d=1.78 (95% CI 1.20–2.30); construct validity coefficients for the MSF on physician/surgeon performance on two different occasions ranged from d=0.23 (95% CI 0.13–0.33) to d=0.90 (95% CI 0.74–1.10); concurrent validity coefficients for the MSF based on differences in assessor group ratings ranged from d=0.50 (95% CI 0.47–0.52) to d=0.57 (95% CI 0.55–0.60); and predictive validity coefficients for the MSF on physician/surgeon performance across different standardized measures ranged from d=1.28 (95% CI 1.16–1.41) to d=1.43 (95% CI 0.87–2.00). Conclusion The construct and criterion validity of the MSF system is supported by small to large effect size differences based on the MSF process and physician/surgeon performance across different clinical and nonclinical domain measures. PMID:24600300
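
    A random-effects pooling of effect sizes of the kind reported here is commonly carried out with the DerSimonian-Laird estimator: fixed-effect weights yield a heterogeneity statistic Q, from which a between-study variance tau^2 is estimated and folded back into the weights. The sketch below is a generic implementation under that assumption, with invented study values rather than the MSF data.

        def dersimonian_laird(effects, variances):
            """Pooled random-effects mean effect size and its 95% CI (DerSimonian-Laird)."""
            k = len(effects)
            w = [1.0 / v for v in variances]                  # fixed-effect weights
            d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
            q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
            c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
            w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
            d_pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
            se = (1.0 / sum(w_star)) ** 0.5
            return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

        # Invented per-study standardized effect sizes and variances (not the MSF data).
        print(dersimonian_laird([0.5, 0.9, 1.2, 0.7], [0.04, 0.06, 0.09, 0.05]))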

  16. Reliability, validity and description of timed performance of the Jebsen-Taylor Test in patients with muscular dystrophies.

    PubMed

    Artilheiro, Mariana Cunha; Fávero, Francis Meire; Caromano, Fátima Aparecida; Oliveira, Acary de Souza Bulle; Carvas, Nelson; Voos, Mariana Callil; Sá, Cristina Dos Santos Cardoso de

    2017-12-08

    The Jebsen-Taylor Test evaluates upper limb function by measuring timed performance on everyday activities. The test is used to assess and monitor the progression of patients with Parkinson disease, cerebral palsy, stroke and brain injury. To analyze the reliability, internal consistency and validity of the Jebsen-Taylor Test in people with Muscular Dystrophy and to describe and classify upper limb timed performance of people with Muscular Dystrophy. Fifty patients with Muscular Dystrophy were assessed. Non-dominant and dominant upper limb performances on the Jebsen-Taylor Test were filmed. Two raters evaluated timed performance for inter-rater reliability analysis. Test-retest reliability was investigated by using intraclass correlation coefficients. Internal consistency was assessed using the Cronbach alpha. Construct validity was assessed by comparing the Jebsen-Taylor Test with the Performance of Upper Limb. The internal consistency of the Jebsen-Taylor Test was good (Cronbach's α=0.98). Inter-rater reliability was very high (0.903-0.999), except for the writing task, which had an intraclass correlation coefficient of 0.772-1.000. Strong correlations between the Jebsen-Taylor Test and the Performance of Upper Limb Module were found (rho=-0.712). The Jebsen-Taylor Test is a reliable and valid measure of timed performance for people with Muscular Dystrophy. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.

  17. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to realistic, and they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy to use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  18. Development of Valid and Reliable Measures of Patient and Family Experiences of Hospice Care for Public Reporting.

    PubMed

    Anhang Price, Rebecca; Stucky, Brian; Parast, Layla; Elliott, Marc N; Haas, Ann; Bradley, Melissa; Teno, Joan M

    2018-03-20

    Increasingly, dying patients and their families have a choice of hospice providers. Care quality varies considerably across providers; informing consumers of these differences may help to improve their selection of hospices. To develop and evaluate standardized survey measures of hospice care experiences for the purpose of comparing and publicly reporting hospice performance. We assessed item performance and constructed composite measures by factor analysis, evaluating item-scale correlations and estimating reliability. To assess key drivers of overall experiences, we regressed overall rating and willingness to recommend the hospice on each composite. Data submitted by 2500 hospices participating in national implementation of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Hospice Survey for April through September 2015. Composite measures of Hospice Team Communication, Getting Timely Care, Treating Family Member with Respect, Getting Emotional and Religious Support, Getting Help for Symptoms, and Getting Hospice Care Training. Cronbach's alpha estimates for the composite measures range from 0.61 to 0.85; hospice-level reliability for the measures ranges from 0.67 to 0.81 assuming 200 completed surveys per hospice. Together, the composites are responsible for 48% of the variance in caregivers' overall ratings of hospices. Hospice Team Communication is the strongest predictor of overall rating of care. Our analyses provide evidence of the reliability and validity of CAHPS Hospice Survey measure scores. Results also highlight important opportunities to improve the quality of hospice care, particularly with regard to addressing symptoms of anxiety and sadness, discussing side effects of pain medicine, and keeping family informed of the patient's condition.
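
    Cronbach's alpha for a k-item composite, as reported for these survey measures, is computed from the item variances and the variance of the summed score. The sketch below follows that standard definition with a toy respondent-by-item matrix; it is not the CAHPS Hospice Survey data or scoring code.

        import numpy as np

        def cronbach_alpha(item_scores):
            """Cronbach's alpha for a respondents-by-items score matrix."""
            x = np.asarray(item_scores, dtype=float)
            n_items = x.shape[1]
            item_vars = x.var(axis=0, ddof=1)          # variance of each item
            total_var = x.sum(axis=1).var(ddof=1)      # variance of the summed scale score
            return n_items / (n_items - 1) * (1.0 - item_vars.sum() / total_var)

        # Toy example: 6 respondents answering a 4-item composite on a 1-4 scale.
        scores = [[4, 4, 3, 4], [3, 3, 3, 2], [2, 2, 1, 2], [4, 3, 4, 4], [1, 2, 2, 1], [3, 4, 3, 3]]
        print(round(cronbach_alpha(scores), 3))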

  19. Validity of linear encoder measurement of sit-to-stand performance power in older people.

    PubMed

    Lindemann, U; Farahmand, P; Klenk, J; Blatzonis, K; Becker, C

    2015-09-01

    To investigate construct validity of linear encoder measurement of sit-to-stand performance power in older people by showing associations with relevant functional performance and physiological parameters. Cross-sectional study. Movement laboratory of a geriatric rehabilitation clinic. Eighty-eight community-dwelling, cognitively unimpaired older women (mean age 78 years). Sit-to-stand performance power and leg power were assessed using a linear encoder and the Nottingham Power Rig, respectively. Gait speed was measured on an instrumented walkway. Maximum quadriceps and hand grip strength were assessed using dynamometers. Mid-thigh muscle cross-sectional area of both legs was measured using magnetic resonance imaging. Associations of sit-to-stand performance power with power assessed by the Nottingham Power Rig, maximum gait speed and muscle cross-sectional area were r=0.646, r=0.536 and r=0.514, respectively. A linear regression model explained 50% of the variance in sit-to-stand performance power including muscle cross-sectional area (p=0.001), maximum gait speed (p=0.002), and power assessed by the Nottingham Power Rig (p=0.006). Construct validity of linear encoder measurement of sit-to-stand power was shown at functional level and morphological level for older women. This measure could be used in routine clinical practice as well as in large-scale studies. DRKS00003622. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  20. MRPrimerW: a tool for rapid design of valid high-quality primers for multiple target qPCR experiments.

    PubMed

    Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo

    2016-07-08

    Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests, lack of support for ranking of primers, TaqMan probes and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe the MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Reliability and validity of the Performance Recorder 1 for measuring isometric knee flexor and extensor strength.

    PubMed

    Neil, Sarah E; Myring, Alec; Peeters, Mon Jef; Pirie, Ian; Jacobs, Rachel; Hunt, Michael A; Garland, S Jayne; Campbell, Kristin L

    2013-11-01

    Muscular strength is a key parameter of rehabilitation programs and a strong predictor of functional capacity. Traditional methods to measure strength, such as manual muscle testing (MMT) and hand-held dynamometry (HHD), are limited by the strength and experience of the tester. The Performance Recorder 1 (PR1) is a strength assessment tool attached to resistance training equipment and may be a time- and cost-effective tool to measure strength in clinical practice that overcomes some limitations of MMT and HHD. However, reliability and validity of the PR1 have not been reported. Test-retest and inter-rater reliability was assessed using the PR1 in healthy adults (n = 15) during isometric knee flexion and extension. Criterion-related validity was assessed through comparison of values obtained from the PR1 and Biodex® isokinetic dynamometer. Test-retest reliability was excellent for peak knee flexion (intra-class correlation coefficient [ICC] of 0.96, 95% CI: 0.85, 0.99) and knee extension (ICC = 0.96, 95% CI: 0.87, 0.99). Inter-rater reliability was also excellent for peak knee flexion (ICC = 0.95, 95% CI: 0.85, 0.99) and peak knee extension (ICC = 0.97, 95% CI: 0.91, 0.99). Validity was moderate for peak knee flexion (ICC = 0.75, 95% CI: 0.38, 0.92) but poor for peak knee extension (ICC = 0.37, 95% CI: 0, 0.73). The PR1 provides a reliable measure of isometric knee flexor and extensor strength in healthy adults that could be used in the clinical setting, but absolute values may not be comparable to strength assessment by gold-standard measures.
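
    The intraclass correlation coefficients reported above are commonly computed in the Shrout-Fleiss two-way random-effects, absolute-agreement, single-measure form, ICC(2,1), from a subjects-by-sessions (or subjects-by-raters) table. The sketch below is a generic implementation of that formula with toy data; the exact ICC form used by the authors and the PR1 dataset itself are not reproduced here.

        import numpy as np

        def icc_2_1(data):
            """ICC(2,1): two-way random effects, absolute agreement, single measurement.

            data: n_subjects x k_sessions (or raters) array of scores.
            """
            x = np.asarray(data, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
            ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between sessions
            ss_err = ((x - x.mean(axis=1, keepdims=True)
                         - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
            ms_err = ss_err / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        # Toy test-retest data: 5 subjects measured in 2 sessions (arbitrary units).
        print(round(icc_2_1([[110, 112], [95, 98], [130, 128], [102, 105], [88, 90]]), 3))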

  2. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor the comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code allows both of these gaps to be addressed. In the following sections, the employed methodology, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  3. Motor Sensory Performance - Skylab Student Experiment ED-41

    NASA Technical Reports Server (NTRS)

    1973-01-01

    This chart describes the Skylab student experiment Motor Sensory Performance, proposed by Kathy L. Jackson of Houston, Texas. Her proposal was a very simple but effective test to measure the potential degradation of man's motor-sensory skills while weightless. Without such testing, it could not be fully known whether man can retain a high level of competency in the performance of various tasks after long exposure to weightlessness. Skylab, with its long-duration missions, provided an ideal testing situation. The experiment Kathy Jackson proposed was similar in application to the tasks involved in docking one spacecraft to another using manual control. It represented one of the greatest tests of the motor-sensory capabilities of man. In March 1972, NASA and the National Science Teachers Association selected 25 experiment proposals for flight on Skylab. Science advisors from the Marshall Space Flight Center assisted the students in developing the proposals for flight on Skylab.

  4. Reliability and validity of an accelerometric system for assessing vertical jumping performance.

    PubMed

    Choukou, M-A; Laffaye, G; Taiar, R

    2014-03-01

    The validity of an accelerometric system (Myotest©) for assessing vertical jump height, vertical force and power, leg stiffness and reactivity index was examined. Twenty healthy males performed 3 × "5 hops in place", 3 × "1 squat jump" and 3 × "1 countermovement jump" during 2 test-retest sessions. The variables were simultaneously assessed using an accelerometer and a force platform at a frequency of 0.5 and 1 kHz, respectively. Both reliability and validity of the accelerometric system were studied. No significant differences between test and retest data were found (p < 0.05), showing a high level of reliability. In addition, moderate to high intraclass correlation coefficients (ICCs) (from 0.74 to 0.96) were obtained for all variables whereas weak to moderate ICCs (from 0.29 to 0.79) were obtained for force and power during the countermovement jump. With regard to validity, the difference between the two devices was not significant for 5 hops in place height (1.8 cm), force during squat (-1.4 N · kg(-1)) and countermovement (0.1 N · kg(-1)) jumps, leg stiffness (7.8 kN · m(-1)) and reactivity index (0.4). Thus, the measurements of these variables with this accelerometer are valid, which is not the case for the other variables. The main causes of non-validity for velocity, power and contact time assessment are temporal biases in the detection of the takeoff and touchdown moments.
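
    Accelerometric systems of this kind typically derive jump height from the flight time between takeoff and landing via the ballistic relation h = g*t_f^2/8. The sketch below illustrates only that relation together with a crude threshold-based flight-time estimate; the thresholds are assumptions and the device's proprietary processing is not reproduced.

        G = 9.81  # m/s^2

        def jump_height_from_flight_time(flight_time_s):
            """Vertical jump height (m) from flight time, assuming symmetric ballistic flight."""
            return G * flight_time_s ** 2 / 8.0

        def flight_time_from_acceleration(a_vertical, sample_rate_hz, free_fall_threshold=2.0):
            """Crude flight-time estimate (s): longest run of samples reading near free fall.

            a_vertical is the vertical proper acceleration in m/s^2 as read by a body-worn
            accelerometer (about 9.81 when standing, near 0 in flight); the threshold is
            an illustrative assumption.
            """
            longest = current = 0
            for a in a_vertical:
                current = current + 1 if abs(a) < free_fall_threshold else 0
                longest = max(longest, current)
            return longest / sample_rate_hz

        # Example: a 0.50 s flight time corresponds to roughly a 0.31 m jump.
        print(round(jump_height_from_flight_time(0.50), 3))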

  5. Apollo experience report. Crew-support activities for experiments performed during manned space flight

    NASA Technical Reports Server (NTRS)

    Mckee, J. W.

    1974-01-01

    Experiments are performed during manned space flights in an attempt to acquire knowledge that can advance science and technology or that can be applied to operational techniques for future space flights. A description is given of the procedures that the personnel who are directly assigned to the function of crew support at the NASA Lyndon B. Johnson Space Center use to prepare for and to conduct experiments during space flight.

  6. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD

    PubMed Central

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A

    2018-01-01

    Purpose The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. Conclusion The TIRE measures of MIP, SMIP and ID have excellent test–retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP. PMID:29805255

  7. [Perception scales of validated food insecurity: the experience of the countries in Latin America and the Caribbean].

    PubMed

    Sperandio, Naiara; Morais, Dayane de Castro; Priore, Silvia Eloiza

    2018-02-01

    The scope of this systematic review was to compare the food insecurity scales validated and used in the countries in Latin America and the Caribbean, and analyze the methods used in validation studies. A search was conducted in the Lilacs, SciELO and Medline electronic databases. The publications were pre-selected by titles and abstracts, and subsequently by a full reading. Of the 16,325 studies reviewed, 14 were selected. Twelve validated scales were identified for the following countries: Venezuela, Brazil, Colombia, Bolivia, Ecuador, Costa Rica, Mexico, Haiti, the Dominican Republic, Argentina and Guatemala. Besides these, there is the Latin American and Caribbean scale, the scope of which is regional. The scales differed in the standard reference used, the number of questions, and the diagnosis of insecurity. The methods used by the studies for internal validation were calculation of Cronbach's alpha and the Rasch model; for external validation the authors calculated association and/or correlation with socioeconomic and food consumption variables. The successful experience of Latin America and the Caribbean in the development of national and regional scales can be an example for other countries that do not have this important indicator capable of measuring the phenomenon of food insecurity.

  8. Performance-based comparison of neonatal intubation training outcomes: simulator and live animal.

    PubMed

    Andreatta, Pamela B; Klotz, Jessica J; Dooley-Hash, Suzanne L; Hauptman, Joe G; Biddinger, Bea; House, Joseph B

    2015-02-01

    The purpose of this article was to establish psychometric validity evidence for competency assessment instruments and to evaluate the impact of 2 forms of training on the abilities of clinicians to perform neonatal intubation. To inform the development of assessment instruments, we conducted comprehensive task analyses including each performance domain associated with neonatal intubation. Expert review confirmed content validity. Construct validity was established using the instruments to differentiate between the intubation performance abilities of practitioners (N = 294) with variable experience (novice through expert). Training outcomes were evaluated using a quasi-experimental design to evaluate performance differences between 294 subjects randomly assigned to 1 of 2 training groups. The training intervention followed American Heart Association Pediatric Advanced Life Support and Neonatal Resuscitation Program protocols with hands-on practice using either (1) live feline or (2) simulated feline models. Performance assessment data were captured before and directly following the training. All data were analyzed using analysis of variance with repeated measures and statistical significance set at P < .05. Content validity, reliability, and consistency evidence were established for each assessment instrument. Construct validity for each assessment instrument was supported by significantly higher scores for subjects with greater levels of experience, as compared with those with less experience (P = .000). Overall, subjects performed significantly better in each assessment domain, following the training intervention (P = .000). After controlling for experience level, there were no significant differences among the cognitive, performance, and self-efficacy outcomes between clinicians trained with live animal model or simulator model. Analysis of retention scores showed that simulator trained subjects had significantly higher performance scores after 18 weeks (P = .01

  9. Development and validation of a web-based questionnaire for surveying the health and working conditions of high-performance marine craft populations.

    PubMed

    de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl

    2016-06-20

    High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to the musculoskeletal health conditions and performance in high-performance marine craft populations. A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations.
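
    The content-validity indices described here reduce to simple proportions: the I-CVI of an item is the fraction of expert raters giving it a 3 or 4 on the 4-point relevance (or simplicity) scale, and the S-CVI/Ave is the mean of the item-level I-CVIs. The sketch below shows that calculation with invented ratings, not the ratings from the study.

        def item_cvi(ratings, agreement_levels=(3, 4)):
            """I-CVI: proportion of experts rating the item 3 or 4 on a 4-point scale."""
            return sum(r in agreement_levels for r in ratings) / len(ratings)

        def scale_cvi_average(ratings_per_item):
            """S-CVI/Ave: mean of the item-level CVIs across all items."""
            item_cvis = [item_cvi(r) for r in ratings_per_item]
            return sum(item_cvis) / len(item_cvis)

        # Invented example: 3 items, each rated by 6 experts on a 1-4 relevance scale.
        ratings = [[4, 4, 3, 4, 2, 4], [3, 4, 4, 4, 4, 3], [2, 3, 4, 3, 4, 4]]
        print([round(item_cvi(r), 2) for r in ratings], round(scale_cvi_average(ratings), 2))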

  10. Development and validation of a web-based questionnaire for surveying the health and working conditions of high-performance marine craft populations

    PubMed Central

    de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl

    2016-01-01

    Background High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. Aim To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to the musculoskeletal health conditions and performance in high-performance marine craft populations. Method A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. Results The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. Conclusions The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations. PMID:27324717

  11. Ecological Development and Validation of a Music Performance Rating Scale for Five Instrument Families

    ERIC Educational Resources Information Center

    Wrigley, William J.; Emmerson, Stephen B.

    2013-01-01

    This study investigated ways to improve the quality of music performance evaluation in an effort to address the accountability imperative in tertiary music education. An enhanced scientific methodology was employed incorporating ecological validity and using recognized qualitative methods involving grounded theory and quantitative methods…

  12. Neurobehavioral performance and work experience in Florida farmworkers.

    PubMed

    Kamel, Freya; Rowland, Andrew S; Park, Lawrence P; Anger, W Kent; Baird, Donna D; Gladen, Beth C; Moreno, Tirso; Stallone, Lillian; Sandler, Dale P

    2003-11-01

    Farmworkers experience many work-related hazards, including exposure to neurotoxicants. We compared neurobehavioral performance of 288 farmworkers in central Florida who had done farm work for at least 1 month with 51 controls who had not. Most of the farmworkers had worked in one or more of three types of agriculture: ornamental ferns, nurseries, or citrus fruit. We collected information on farm work history in a structured interview and evaluated neurobehavioral performance using a battery of eight tests. Analyses were adjusted for established confounders including age, sex, education, and acculturation. Ever having done farm work was associated with poor performance on four tests--digit span [odds ratio (OR) = 1.90; 95% confidence interval (CI), 1.02-3.53], tapping (coefficient = 4.13; 95% CI, 0.00-8.27), Santa Ana test (coefficient = 1.34; 95% CI, 0.29-2.39), and postural sway (coefficient = 4.74; 95% CI, -2.20 to 11.7)--but had little effect on four others: symbol digit latency, vibrotactile threshold, visual contrast sensitivity, and grip strength. Associations with farm work were similar in magnitude to associations with personal characteristics such as age and sex. Longer duration of farm work was associated with worse performance. Associations with fern work were more consistent than associations with nursery or citrus work. Deficits related to the duration of work experience were seen in former as well as current farmworkers, and decreased performance was related to chronic exposure even in the absence of a history of pesticide poisoning. We conclude that long-term experience of farm work is associated with measurable deficits in cognitive and psychomotor function.

  13. A systematic review of the reliability and validity of discrete choice experiments in valuing non-market environmental goods.

    PubMed

    Rakotonarivo, O Sarobidy; Schaafsma, Marije; Hockley, Neal

    2016-12-01

    While discrete choice experiments (DCEs) are increasingly used in the field of environmental valuation, they remain controversial because of their hypothetical nature and the contested reliability and validity of their results. We systematically reviewed evidence on the validity and reliability of environmental DCEs from the past thirteen years (January 2003-February 2016). A total of 107 articles met our inclusion criteria. These studies provide limited and mixed evidence of the reliability and validity of DCE. Valuation results were susceptible to small changes in survey design in 45% of outcomes reporting reliability measures. DCE results were generally consistent with those of other stated preference techniques (convergent validity), but hypothetical bias was common. Evidence supporting theoretical validity (consistency with assumptions of rational choice theory) was limited. In content validity tests, 2-90% of respondents protested against a feature of the survey, and a considerable proportion found DCEs to be incomprehensible or inconsequential (17-40% and 10-62% respectively). DCE remains useful for non-market valuation, but its results should be used with caution. Given the sparse and inconclusive evidence base, we recommend that tests of reliability and validity are more routinely integrated into DCE studies and suggest how this might be achieved. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Assessing decentering: validation, psychometric properties, and clinical usefulness of the Experiences Questionnaire in a Spanish sample.

    PubMed

    Soler, Joaquim; Franquesa, Alba; Feliu-Soler, Albert; Cebolla, Ausias; García-Campayo, Javier; Tejedor, Rosa; Demarzo, Marcelo; Baños, Rosa; Pascual, Juan Carlos; Portella, Maria J

    2014-11-01

    Decentering is defined as the ability to observe one's thoughts and feelings in a detached manner. The Experiences Questionnaire (EQ) is a self-report instrument that originally assessed decentering and rumination. The purpose of this study was to evaluate the psychometric properties of the Spanish version of EQ-Decentering and to explore its clinical usefulness. The 11-item EQ-Decentering subscale was translated into Spanish and psychometric properties were examined in a sample of 921 adult individuals, 231 with psychiatric disorders and 690 without. The subsample of nonpsychiatric participants was also split according to their previous meditative experience (meditative participants, n=341; and nonmeditative participants, n=349). Additionally, differences among these three subgroups were explored to determine clinical validity of the scale. Finally, EQ-Decentering was administered twice in a group of patients with borderline personality disorder, before and after a 10-week mindfulness intervention. Confirmatory factor analysis indicated acceptable model fit, S-Bχ²=243.8836 (p<.001), CFI=.939, GFI=.936, SRMR=.040, and RMSEA=.06 (.060-.077), and psychometric properties were found to be satisfactory (reliability: Cronbach's α=.893; convergent validity: r>.46; and divergent validity: r<-.35). The scale detected changes in decentering after a 10-session intervention in mindfulness (t=-4.692, p<.00001). Differences among groups were significant (F=134.8, p<.000001), where psychiatric participants showed the lowest scores compared to nonpsychiatric meditative and nonmeditative participants. The Spanish version of the EQ-Decentering is a valid and reliable instrument to assess decentering in both clinical and nonclinical samples. In addition, the findings show that EQ-Decentering seems an adequate outcome instrument to detect changes after mindfulness-based interventions. Copyright © 2014. Published by Elsevier Ltd.

  15. Validation of an instrument to measure patients' experiences of medicine use: the Living with Medicines Questionnaire.

    PubMed

    Krska, Janet; Katusiime, Barbra; Corlett, Sarah A

    2017-01-01

    Medicine-related burden is an increasingly recognized concept, stemming from the rising tide of polypharmacy, which may impact on patient behaviors, including nonadherence. No instruments currently exist which specifically measure medicine-related burden. The Living with Medicines Questionnaire (LMQ) was developed for this purpose. This study validated the LMQ in a sample of adults using regular prescription medicines in the UK. Questionnaires were distributed in community pharmacies and public places in southeast England or online through UK health websites and social media. A total of 1,177 were returned: 507 (43.1%) from pharmacy distribution and 670 (56.9%) online. Construct validity was assessed by principal components analysis and item reduction undertaken on the original 60-item pool. Known-groups analysis assessed differences in mean total scores between participants using different numbers of medicines and between those who did or did not require assistance with medicine use. Internal consistency was assessed by Cronbach's alpha. Free-text comments were analyzed thematically to substantiate underlying dimensions. A 42-item, eight-factor structure comprising intercorrelated dimensions (patient-doctor relationships and communication about medicines, patient-pharmacist communication about medicines, interferences with daily life, practical difficulties, effectiveness, acceptance of medicine use, autonomy/control over medicines and concerns about medicine use) was derived, which explained 57.4% of the total variation. Six of the eight subscales had acceptable internal consistency (α>0.7). More positive experiences were observed among patients using eight or fewer medicines compared to nine or more, and those independent with managing/using their medicines versus those requiring assistance. Free-text comments, provided by almost a third of the respondents, supported the domains identified. The resultant LMQ-2 is a valid and reliable multidimensional measure of medicine-related burden.
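
    The construct-validity step described above (principal components analysis with item reduction, eight retained dimensions explaining 57.4% of the variance) can be sketched as follows; the item responses below are random placeholders, so the printed percentage will not reproduce the published figure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical item responses: 300 respondents x 42 items (random placeholder data)
rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(300, 42)).astype(float)

pca = PCA(n_components=8)                    # retain eight components, as in an 8-factor solution
pca.fit(StandardScaler().fit_transform(X))   # PCA on standardised items

explained = pca.explained_variance_ratio_.sum() * 100
print(f"Variance explained by 8 components: {explained:.1f}%")
```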

  16. Finite element analysis of dental implants with validation: to what extent can we expect the model to predict biological phenomena? A literature review and proposal for classification of a validation process.

    PubMed

    Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya

    2018-03-08

    A literature review of finite element analysis (FEA) studies of dental implants with their model validation process was performed to establish the criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation" and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.

  17. Demographic differences in sport performers' experiences of organizational stressors.

    PubMed

    Arnold, R; Fletcher, D; Daniels, K

    2016-03-01

    Organizational stressors are particularly prevalent across sport performers' experiences and can influence their performance, health, and well-being. Research has been conducted to identify which organizational stressors are encountered by sport performers, but little is known about how these experiences vary from athlete to athlete. The purpose of this study was to examine if the frequency, intensity, and duration of the organizational stressors that sport performers encounter vary as a function of gender, sport type, and performance level. Participants (n = 1277) completed the Organizational Stressor Indicator for Sport Performers (OSI-SP; Arnold et al., 2013), and the resultant data were analyzed using multivariate analyses of covariance. The findings show that demographic differences are apparent in the dimensions of the goals and development, logistics and operations, team and culture, coaching, and selection organizational stressors that sport performers encounter. More specifically, significant differences were found between males and females, between team and individual-based performers, and between performers competing at national or international, regional or university, and county or club levels. These findings have important implications for theory and research on organizational stress, and for the development of stress management interventions with sport performers. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Preparing for the Validation Visit--Guidelines for Optimizing the Experience.

    ERIC Educational Resources Information Center

    Osborn, Hazel A.

    2003-01-01

    Urges child care programs to seek accreditation from NAEYC's National Academy of Early Childhood Programs to increase program quality and provides information on the validation process. Includes information on the validation visit and the validator's role and background. Offers suggestions for preparing the director, staff, children, and families…

  19. Closing the patient experience chasm: A two-level validation of the Consumer Quality Index Inpatient Hospital Care.

    PubMed

    Smirnova, Alina; Lombarts, Kiki M J M H; Arah, Onyebuchi A; van der Vleuten, Cees P M

    2017-10-01

    Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures. To validate the Consumer Quality Index (CQI) Inpatient Hospital Care for use on both department and hospital levels. Using cross-sectional observational data, we investigated the internal validity of the questionnaire using confirmatory factor analyses (CFA), and the generalizability of the questionnaire for use at the department and hospital levels using generalizability theory. 22924 adults hospitalized for ≥24 hours between 1 January 2013 and 31 December 2014 in 23 Dutch hospitals (515 department evaluations). CQI Inpatient Hospital Care questionnaire. CFA results showed a good fit on individual level (CFI=0.96, TLI=0.95, RMSEA=0.04), which was comparable between specialties. When scores were aggregated to the department level, the fit was less desirable (CFI=0.83, TLI=0.81, RMSEA=0.06), and there was a significant overlap between communication with doctors and explanation of treatment subscales. Departments and hospitals explained ≤5% of total variance in subscale scores. In total, 4-8 departments and 50 respondents per department are needed to reliably evaluate subscales rated on a 4-point scale, and 10 departments with 100-150 respondents per department for binary subscales. The CQI Inpatient Hospital Care is a valid and reliable questionnaire to evaluate inpatient experiences in Dutch hospitals provided sufficient sampling is done. Results can facilitate meaningful comparisons and guide quality improvement activities in individual departments and hospitals. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  20. Validating a large geophysical data set: Experiences with satellite-derived cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie

    1992-01-01

    We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed

  1. Retrieval and validation of carbon dioxide, methane and water vapor for the Canary Islands IR-laser occultation experiment

    NASA Astrophysics Data System (ADS)

    Proschek, V.; Kirchengast, G.; Schweitzer, S.; Brooke, J. S. A.; Bernath, P. F.; Thomas, C. B.; Wang, J.-G.; Tereszchuk, K. A.; González Abad, G.; Hargreaves, R. J.; Beale, C. A.; Harrison, J. J.; Martin, P. A.; Kasyutich, V. L.; Gerbig, C.; Loescher, A.

    2015-08-01

    The first ground-based experiment to prove the concept of a novel space-based observation technique for microwave and infrared-laser occultation between low-Earth-orbit satellites was performed in the Canary Islands between La Palma and Tenerife. During two nights, from 21 to 22 July 2011, the experiment demonstrated the infrared-laser differential transmission principle for the measurement of greenhouse gases (GHGs) in the free atmosphere. Such global and long-term stable measurements of GHGs, accompanied also by measurements of thermodynamic parameters and line-of-sight wind in a self-calibrating way, have become very important for climate change monitoring. The experiment delivered promising initial data for demonstrating the new observation concept by retrieving volume mixing ratios of GHGs along a ~144 km signal path at altitudes of ~2.4 km. Here, we present a detailed analysis of the measurements, following a recent publication that introduced the experiment's technical setup and first results for an example retrieval of CO2. We present the observational and validation data sets, the latter simultaneously measured at the transmitter and receiver sites; the measurement data handling; and the differential transmission retrieval procedure. We also determine the individual and combined uncertainties influencing the results and present the retrieval results for 12CO2, 13CO2, C18OO, H2O and CH4. The new method is found to provide a reliable basis for monitoring greenhouse gases such as CO2, CH4 and H2O in the free atmosphere.

  2. Mice Drawer System (MDS): procedures performed on-orbit during experiment phase

    NASA Astrophysics Data System (ADS)

    Ciparelli, Paolo; Falcetti, Giancarlo; Tenconi, Chiara; Pignataro, Salvatore; Cotronei, Vittorio

    Mice Drawer System is a payload that can be integrated inside the Space Shuttle middeck during transportation to/from the ISS, and inside the Express Rack in the ISS during experiment execution. It is designed to perform experiments as automatically as possible; only maintenance activities require procedures involving the crew. The first MDS experiment was performed with Shuttle STS-128, launched on August 28, 2009 at 23:58 EDT (06:58 Italian time). While in the Shuttle, MDS operated in SURVIVAL mode, cooled by air from the rear part of the middeck: this mode automatically supplies water and night-and-day cycles to the mice, but not food, which was supplied ad libitum before launch by a dedicated food bar inserted inside the cage. In this phase, a visual check was performed every day by the crew to verify the well-being of the mice. While on the ISS, MDS operated in EXPERIMENT mode, cooled by water from the EXPRESS Rack. In this mode, the MDS experiment was completely automatic: water, food, and night-and-day cycles were commanded every day by the payload. Only maintenance activities to replace consumable items and to fill the potable water reservoir were foreseen and executed by the crew. Food envelope replacement was scheduled every 19 days, waste filter replacement was performed every 30 days, and potable water reservoir refilling was performed every 9 days. Nominal activities performed on the ISS also included the transfer from Shuttle to ISS and the reconfiguration from ascent to on-orbit operation after launch. The reconfiguration from on-orbit to descent and the transfer from ISS to Shuttle were performed before Shuttle undocking and landing.

  3. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  4. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
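
    One plausible reading of the permutation test discussed in these two records is sketched below: a model is fitted on a development sample, its c-statistic is computed on a validation sample, and that value is compared with the c-statistics obtained when observations are randomly reassigned to the "validation" role. This is an illustrative reconstruction on simulated data, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def simulate(n, beta):
    """Simulate a binary outcome from two predictors (toy data, not the study's)."""
    X = rng.normal(size=(n, 2))
    p = 1 / (1 + np.exp(-(X @ beta)))
    return X, rng.binomial(1, p)

X_dev, y_dev = simulate(1000, np.array([1.0, 0.8]))   # development set
X_val, y_val = simulate(500, np.array([0.6, 0.5]))    # validation set with weaker effects

model = LogisticRegression().fit(X_dev, y_dev)
lp_dev = model.decision_function(X_dev)               # linear predictors
lp_val = model.decision_function(X_val)

c_val = roc_auc_score(y_val, lp_val)                  # observed c-statistic at validation

# Permutation distribution: shuffle which observations count as "validation"
lp_all = np.concatenate([lp_dev, lp_val])
y_all = np.concatenate([y_dev, y_val])
n_val = len(y_val)
c_perm = []
for _ in range(1000):
    idx = rng.permutation(len(y_all))[:n_val]
    c_perm.append(roc_auc_score(y_all[idx], lp_all[idx]))

p_value = np.mean(np.array(c_perm) <= c_val)          # one-sided: is the validation c unusually low?
print(f"c (validation) = {c_val:.3f}, permutation p = {p_value:.3f}")
```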

  5. Performance of automatic scanning microscope for nuclear emulsion experiments

    NASA Astrophysics Data System (ADS)

    Güler, A. Murat; Altınok, Özgür

    2015-12-01

    The impressive improvements in scanning technology and methods have allowed nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we have reached 99% tracking efficiency for minimum ionizing tracks penetrating through the emulsion films. The automatic scanning system has been successfully used for scanning emulsion films in the OPERA experiment and is planned for use in the next generation of nuclear emulsion experiments.

  6. Performance of automatic scanning microscope for nuclear emulsion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Güler, A. Murat, E-mail: mguler@newton.physics.metu.edu.tr; Altınok, Özgür; Tufts University, Medford, MA 02155

    The impressive improvements in scanning technology and methods have allowed nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we have reached 99% tracking efficiency for minimum ionizing tracks penetrating through the emulsion films. The automatic scanning system has been successfully used for scanning emulsion films in the OPERA experiment and is planned for use in the next generation of nuclear emulsion experiments.

  7. Validation of a dye stain assay for vaginally inserted HEC-filled microbicide applicators

    PubMed Central

    Katzen, Lauren L.; Fernández-Romero, José A.; Sarna, Avina; Murugavel, Kailapuri G.; Gawarecki, Daniel; Zydowsky, Thomas M.; Mensch, Barbara S.

    2011-01-01

    Background The reliability and validity of self-reports of vaginal microbicide use are questionable given the explicit understanding that participants are expected to comply with study protocols. Our objective was to optimize the Population Council's previously validated dye stain assay (DSA) and related procedures, and establish predictive values for the DSA's ability to identify vaginally inserted single-use, low-density polyethylene microbicide applicators filled with hydroxyethylcellulose gel. Methods Applicators, inserted by 252 female sex workers enrolled in a microbicide feasibility study in Southern India, served as positive controls for optimization and validation experiments. Prior to validation, optimal dye concentration and staining time were ascertained. Three validation experiments were conducted to determine sensitivity, specificity, negative predictive values and positive predictive values. Results The dye concentration of 0.05% (w/v) FD&C Blue No. 1 Granular Food Dye and staining time of five seconds were determined to be optimal and were used for the three validation experiments. There were a total of 1,848 possible applicator readings across validation experiments; 1,703 (92.2%) applicator readings were correct. On average, the DSA performed with 90.6% sensitivity, 93.9% specificity, and had a negative predictive value of 93.8% and a positive predictive value of 91.0%. No statistically significant differences between experiments were noted. Conclusions The DSA was optimized and successfully validated for use with single-use, low-density polyethylene applicators filled with hydroxyethylcellulose (HEC) gel. We recommend including the DSA in future microbicide trials involving vaginal gels in order to identify participants who have low adherence to dosing regimens. In doing so, we can develop strategies to improve adherence as well as investigate the association between product use and efficacy. PMID:21992983
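
    The accuracy measures reported for the dye stain assay (sensitivity, specificity, PPV, NPV) follow directly from a 2x2 table of assay readings against true insertion status, as in this small sketch; the counts used here are illustrative, not the trial's raw data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 accuracy measures for a yes/no assay reading."""
    return {
        "sensitivity": tp / (tp + fn),   # inserted applicators correctly read as used
        "specificity": tn / (tn + fp),   # uninserted applicators correctly read as unused
        "ppv": tp / (tp + fp),           # probability an 'inserted' reading is truly inserted
        "npv": tn / (tn + fn),           # probability an 'uninserted' reading is truly uninserted
    }

# Hypothetical counts (not the study's data), chosen only to exercise the formulas
print(diagnostic_metrics(tp=453, fp=45, fn=47, tn=455))
```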

  8. Measuring Educators' Attitudes and Beliefs about Evaluation: Construct Validity and Reliability of the Teacher Evaluation Experience Scale

    ERIC Educational Resources Information Center

    Reddy, Linda A.; Dudek, Christopher M.; Kettler, Ryan J.; Kurz, Alexander; Peters, Stephanie

    2016-01-01

    This study presents the reliability and validity of the Teacher Evaluation Experience Scale--Teacher Form (TEES-T), a multidimensional measure of educators' attitudes and beliefs about teacher evaluation. Confirmatory factor analyses of data from 583 teachers were conducted on the TEES-T hypothesized five-factor model, as well as on alternative…

  9. An Experiment on the Limits of Quantum Electro-dynamics

    DOE R&D Accomplishments Database

    Barber, W. C.; Richter, B.; Panofsky, W. K. H.; O'Neill, G. K.; Gittelman, B.

    1959-06-01

    The limitations of previously performed or suggested electrodynamic cutoff experiments are reviewed, and an electron-electron scattering experiment to be performed with storage rings to investigate further the limits of the validity of quantum electrodynamics is described. The foreseen experimental problems are discussed, and the results of the associated calculations are given. The parameters and status of the equipment are summarized. (D.C.W.)

  10. Prevalence of Invalid Performance on Baseline Testing for Sport-Related Concussion by Age and Validity Indicator.

    PubMed

    Abeare, Christopher A; Messa, Isabelle; Zuccato, Brandon G; Merker, Bradley; Erdodi, Laszlo

    2018-03-12

    Estimated base rates of invalid performance on baseline testing (base rates of failure) for the management of sport-related concussion range from 6.1% to 40.0%, depending on the validity indicator used. The instability of this key measure represents a challenge in the clinical interpretation of test results that could undermine the utility of baseline testing. To determine the prevalence of invalid performance on baseline testing and to assess whether the prevalence varies as a function of age and validity indicator. This retrospective, cross-sectional study included data collected between January 1, 2012, and December 31, 2016, from a clinical referral center in the Midwestern United States. Participants included 7897 consecutively tested, equivalently proportioned male and female athletes aged 10 to 21 years, who completed baseline neurocognitive testing for the purpose of concussion management. Baseline assessment was conducted with the Immediate Postconcussion Assessment and Cognitive Testing (ImPACT), a computerized neurocognitive test designed for assessment of concussion. Base rates of failure on published ImPACT validity indicators were compared within and across age groups. Hypotheses were developed after data collection but prior to analyses. Of the 7897 study participants, 4086 (51.7%) were male, mean (SD) age was 14.71 (1.78) years, 7820 (99.0%) were primarily English speaking, and the mean (SD) educational level was 8.79 (1.68) years. The base rate of failure ranged from 6.4% to 47.6% across individual indicators. Most of the sample (55.7%) failed at least 1 of 4 validity indicators. The base rate of failure varied considerably across age groups (117 of 140 [83.6%] for those aged 10 years to 14 of 48 [29.2%] for those aged 21 years), representing a risk ratio of 2.86 (95% CI, 2.60-3.16; P < .001). The results for base rate of failure were surprisingly high overall and varied widely depending on the specific validity indicator and the age of the athlete.
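
    A minimal sketch of how a risk ratio such as the 2.86 quoted above can be computed from the failure counts for the youngest and oldest age groups, with a Wald-type 95% confidence interval on the log scale; the published interval may have been derived with a different estimator, so the interval printed here is illustrative only.

```python
import math

def risk_ratio(a: int, n1: int, b: int, n2: int):
    """Risk ratio of group 1 vs group 2 with a 95% CI (log-normal approximation)."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)   # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Failure counts quoted in the abstract: 117/140 at age 10 vs 14/48 at age 21
rr, ci = risk_ratio(117, 140, 14, 48)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```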

  11. Validity And Practicality of Experiment Integrated Guided Inquiry-Based Module on Topic of Colloidal Chemistry for Senior High School Learning

    NASA Astrophysics Data System (ADS)

    Andromeda, A.; Lufri; Festiyed; Ellizar, E.; Iryani, I.; Guspatni, G.; Fitri, L.

    2018-04-01

    This Research & Development study aims to produce a valid and practical experiment integrated guided inquiry based module on topic of colloidal chemistry. The 4D instructional design model was selected in this study. A limited trial of the product was conducted at SMAN 7 Padang. Instruments used were validity and practicality questionnaires. Validity and practicality data were analyzed using Kappa moment. Analysis of the data shows that the Kappa moment for validity was 0.88, indicating a very high degree of validity. Kappa moments for practicality from students and teachers were 0.89 and 0.95, respectively, indicating a high degree of practicality. Analysis of the module filled in by students shows that 91.37% of students could correctly answer the critical thinking, exercise, prelab, postlab and worksheet questions asked in the module. These findings indicate that the integrated guided inquiry based module on topic of colloidal chemistry was valid and practical for chemistry learning in senior high school.
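
    The abstract does not give the exact "Kappa moment" formula used, so the sketch below shows a generic chance-corrected agreement statistic (Cohen's kappa) computed over hypothetical validator judgements, only to illustrate the kind of index being reported; it is not the authors' computation and will not reproduce the published value.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical pass/fail judgements from two validators over 20 module criteria
# (invented data; Cohen's kappa is shown as a familiar kappa-type agreement index)
validator_1 = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]
validator_2 = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(f"kappa = {cohen_kappa_score(validator_1, validator_2):.2f}")
```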

  12. Fundamentals of endoscopic surgery: creation and validation of the hands-on test.

    PubMed

    Vassiliou, Melina C; Dunkin, Brian J; Fried, Gerald M; Mellinger, John D; Trus, Thadeus; Kaneva, Pepa; Lyons, Calvin; Korndorffer, James R; Ujiki, Michael; Velanovich, Vic; Kochman, Michael L; Tsuda, Shawn; Martinez, Jose; Scott, Daniel J; Korus, Gary; Park, Adrian; Marks, Jeffrey M

    2014-03-01

    The Fundamentals of Endoscopic Surgery™ (FES) program consists of online materials and didactic and skills-based tests. All components were designed to measure the skills and knowledge required to perform safe flexible endoscopy. The purpose of this multicenter study was to evaluate the reliability and validity of the hands-on component of the FES examination, and to establish the pass score. Expert endoscopists identified the critical skill set required for flexible endoscopy. These skills were then modeled in a virtual reality simulator (GI Mentor™ II, Simbionix™ Ltd., Airport City, Israel) to create five tasks and metrics. Scores were designed to measure both speed and precision. Validity evidence was assessed by correlating performance with self-reported endoscopic experience (surgeons and gastroenterologists [GIs]). Internal consistency of each test task was assessed using Cronbach's alpha. Test-retest reliability was determined by having the same participant perform the test a second time and comparing their scores. Passing scores were determined by a contrasting groups methodology and use of receiver operating characteristic curves. A total of 160 participants (17% GIs) performed the simulator test. Scores on the five tasks showed good internal consistency reliability and all had significant correlations with endoscopic experience. Total FES scores correlated 0.73 with participants' level of endoscopic experience, providing evidence of their validity, and their internal consistency reliability (Cronbach's alpha) was 0.82. Test-retest reliability was assessed in 11 participants, and the intraclass correlation was 0.85. The passing score was determined and is estimated to have a sensitivity (true positive rate) of 0.81 and a 1-specificity (false positive rate) of 0.21. The FES hands-on skills test examines the basic procedural components required to perform safe flexible endoscopy. It meets rigorous standards of reliability and validity required for high-stakes assessment.
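
    A contrasting-groups standard setting of the kind described above can be illustrated with an ROC curve over simulated scores for a competent and a not-yet-competent group; the cut chosen here maximises Youden's J, which is one common choice but not necessarily the rule used by the FES developers, and all scores below are invented.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)

# Hypothetical total scores for a "competent" and a "not yet competent" group (simulated)
competent = rng.normal(loc=75, scale=10, size=120)
novice = rng.normal(loc=55, scale=12, size=40)

scores = np.concatenate([competent, novice])
labels = np.concatenate([np.ones_like(competent), np.zeros_like(novice)])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)   # cut score maximising Youden's J (sensitivity + specificity - 1)
print(f"pass score ~ {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, 1-specificity {fpr[best]:.2f})")
```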

  13. Minimizing false positive error with multiple performance validity tests: response to Bilder, Sugar, and Hellemann (2014 this issue).

    PubMed

    Larrabee, Glenn J

    2014-01-01

    Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.

  14. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within such methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestation of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
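
    The following toy sketch illustrates the general idea of software-implemented fault insertion: a bit flip is seeded into one replica of a redundant computation and a simple voter detects the disagreement. It is a generic illustration under invented names, not the FIAT tooling or its interfaces.

```python
import copy
import random

def compute(block):
    """Toy 'task': checksum over a data block."""
    return sum(block) % 256

def run_with_voter(block, n_replicas=3, inject_fault=False):
    """Triple-modular-redundancy style run; optionally corrupt one replica's memory."""
    replicas = [copy.deepcopy(block) for _ in range(n_replicas)]
    if inject_fault:
        victim = random.randrange(n_replicas)
        cell = random.randrange(len(replicas[victim]))
        replicas[victim][cell] ^= 0x01            # software-inserted bit flip
    results = [compute(r) for r in replicas]
    detected = len(set(results)) > 1              # the voter sees disagreement
    return results, detected

random.seed(42)
block = [random.randrange(256) for _ in range(64)]
print("fault-free run, disagreement detected:", run_with_voter(block)[1])                   # False
print("faulted run,    disagreement detected:", run_with_voter(block, inject_fault=True)[1]) # True
```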

  15. CryoSat-2: Post launch performance of SIRAL-2 and its calibration/validation

    NASA Astrophysics Data System (ADS)

    Cullen, Robert; Francis, Richard; Davidson, Malcolm; Wingham, Duncan

    2010-05-01

    1. INTRODUCTION The main payload of CryoSat-2 [1], SIRAL (Synthetic interferometric radar altimeter), is a Ku band pulse-width limited radar altimeter which transmits pulses at a high pulse repetition frequency, thus making received echoes phase coherent and suitable for azimuth processing [2]. The azimuth processing, in conjunction with correction for slant range, improves along-track resolution to about 250 meters, which is a significant improvement over traditional pulse-width limited systems such as Envisat RA-2 [3]. CryoSat-2 will be launched on 25th February 2010, and this paper describes the pre- and post-launch measures of CryoSat/SIRAL performance and the status of mission validation planning. 2. SIRAL PERFORMANCE: INTERNAL AND EXTERNAL CALIBRATION Phase coherent pulse-width limited radar altimeters such as SIRAL-2 pose a new challenge when considering a strategy for calibration. Along with the need to generate the well-understood corrections for transfer function amplitude with respect to frequency, gain and instrument path delay, there is also a need to provide corrections for transfer function phase with respect to frequency and AGC setting, and for phase variation across bursts of pulses. Furthermore, since some components of these radars are temperature sensitive, care is needed when deciding how often calibrations are performed so as not to impact mission performance. Several internal calibration ground processors have been developed to model imperfections within the CryoSat-2 radar altimeter (SIRAL-2) hardware and reduce their effect on the science data stream via the use of calibration correction auxiliary products within the ground segment. We present the methods and results used to model and remove imperfections and describe the baseline for usage of SIRAL-2 calibration modes during the commissioning phase and the operational exploitation phases of the mission. Additionally we present early results derived from external calibration of SIRAL via

  16. The Second Victim Experience and Support Tool: Validation of an Organizational Resource for Assessing Second Victim Effects and the Quality of Support Resources.

    PubMed

    Burlison, Jonathan D; Scott, Susan D; Browne, Emily K; Thompson, Sierra G; Hoffman, James M

    2017-06-01

    Medical errors and unanticipated negative patient outcomes can damage the well-being of health care providers. These affected individuals, referred to as "second victims," can experience various psychological and physical symptoms. Support resources provided by health care organizations to prevent and reduce second victim-related harm are often inadequate. In this study, we present the development and psychometric evaluation of the Second Victim Experience and Support Tool (SVEST), a survey instrument that can assist health care organizations to implement and track the performance of second victim support resources. The SVEST (29 items representing 7 dimensions and 2 outcome variables) was completed by 303 health care providers involved in direct patient care. The survey collected responses on second victim-related psychological and physical symptoms and the quality of support resources. Desirability of possible support resources was also measured. The SVEST was assessed for content validity, internal consistency, and construct validity with confirmatory factor analysis. Confirmatory factor analysis results suggested good model fit for the survey. Cronbach α reliability scores for the survey dimensions ranged from 0.61 to 0.89. The most desired second victim support option was "A respected peer to discuss the details of what happened." The SVEST can be used by health care organizations to evaluate second victim experiences of their staff and the quality of existing support resources. It can also provide health care organization leaders with information on second victim-related support resources most preferred by their staff. The SVEST can be administered before and after implementing new second victim resources to measure perceptions of effectiveness.

  17. Assessing the validity and reliability of the Malagasy version of Oral Impacts on Daily Performance (OIDP): a cross-sectional study.

    PubMed

    Razanamihaja, Noeline; Ranivoharilanto, Eva

    2017-01-01

    Evaluating health needs includes measures of the impact of state of health on the quality of life. This entails evaluating the psychosocial aspects of health. To achieve this, several tools for measuring the quality of life related to oral health have been developed. However, it is vital to evaluate the psychometric properties of these tools so they can be used in a new context and on a new population. The purpose of this study was to evaluate the reliability and validity of the Malagasy version of a questionnaire for studying the impacts of oral-dental health on daily activities (Oral Impacts on Daily Performance), and to analyse the interrelations between the scores obtained and the oral health indicators. A cross-sectional study was performed for the transcultural adaptation of the Oral Impacts on Daily Performance questionnaire, forward translated and back-translated from English to Malagasy and from Malagasy to English, respectively. The psychometric characteristics of the Malagasy version of the Oral Impacts on Daily Performance were then evaluated in terms of internal reliability, test-retest, and construct, criteria and discriminant validity. Four hundred and six adults responded in face-to-face interviews to the Malagasy version of the Oral Impacts on Daily Performance questionnaire. Nearly 74% of the participants indicated impacts of their oral health on their performance in their daily lives during the 6 months prior to the survey. The activities most affected were: "smiling", "eating" and "sleeping and relaxing". Cronbach's alpha was 0.87. The construct validity was demonstrated by a significant association between the Oral Impacts on Daily Performance scores and the subjective evaluation of oral health (p < 0.001). Discriminant validity was demonstrated by the fact that the Oral Impacts on Daily Performance scores were significantly higher in subjects with more than ten missing teeth, compared to those with fewer than ten missing teeth (p < 0

  18. On the feasibility to perform integral transmission experiments in the GELINA target hall at IRMM

    NASA Astrophysics Data System (ADS)

    Leconte, Pierre; Jean, Cyrille De Saint; Geslot, Benoit; Plompen, Arjan; Belloni, Francesca; Nyman, Markus

    2017-09-01

    Shielding experiments are relevant to validate elastic and inelastic scattering cross sections in the fast energy range. In this paper, we focus on the possibility of using the pulsed white neutron time-of-flight facility GELINA to perform this kind of measurement. Several issues need to be addressed: neutron source intensity, room return effect, distance of the materials to be irradiated from the source, and the sensitivity of various reaction rate distributions through the material to different input cross sections. MCNP6 and TRIPOLI4 calculations of the outgoing neutron spectrum are compared, based on electron/positron/gamma/neutron simulations. A first guess of an integral transmission experiment through a 238U slab is considered. It shows that a 10 cm thickness of uranium is sufficient to reach a high sensitivity to the 238U inelastic scattering cross section in the [2-5 MeV] energy range, with small contributions from the elastic and fission cross sections. This experiment would contribute to reducing the uncertainty on these nuclear data, which have a significant impact on the power distribution in large commercial reactors. Other materials that would be relevant for the ASTRID 4th generation prototype reactor are also tested, showing that a sufficient sensitivity to nuclear data would be obtained by using a 50 to 100 cm thick slab of side 60 x 60 cm. This study concludes on the feasibility and interest of performing such experiments in the target hall of the GELINA facility.

  19. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  20. Assessment of performance validity in the Stroop Color and Word Test in mild traumatic brain injury patients: a criterion-groups validation design.

    PubMed

    Guise, Brian J; Thompson, Matthew D; Greve, Kevin W; Bianchini, Kevin J; West, Laura

    2014-03-01

    The current study assessed performance validity on the Stroop Color and Word Test (Stroop) in mild traumatic brain injury (TBI) using criterion-groups validation. The sample consisted of 77 patients with a reported history of mild TBI. Data from 42 moderate-severe TBI and 75 non-head-injured patients with other clinical diagnoses were also examined. TBI patients were categorized on the basis of Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND). Classification accuracy is reported for three indicators (Word, Color, and Color-Word residual raw scores) from the Stroop across a range of injury severities. With false-positive rates set at approximately 5%, sensitivity was as high as 29%. The clinical implications of these findings are discussed. © 2012 The British Psychological Society.

  1. Establishing the reliability and concurrent validity of physical performance tests using virtual reality equipment for community-dwelling healthy elders.

    PubMed

    Griswold, David; Rockwell, Kyle; Killa, Carri; Maurer, Michael; Landgraff, Nancy; Learman, Ken

    2015-01-01

    The aim of this study was to determine the reliability and concurrent validity of commonly used physical performance tests using the OmniVR Virtual Rehabilitation System for healthy community-dwelling elders. Participants (N = 40) were recruited by the authors and were screened for eligibility. The initial method of measurement was randomized to either virtual reality (VR) or clinically based measures (CM). Physical performance tests included the five times sit to stand (5 × STS), Timed Up and Go (TUG), Forward Functional Reach (FFR) and 30-s stand test. A random number generator determined the testing order. The test-retest reliability for the VR and CM was determined. Furthermore, concurrent validity was determined using a Pearson product moment correlation (Pearson r). The VR demonstrated excellent reliability for the 5 × STS (ICC(3,1) = 0.931), the FFR (ICC(3,1) = 0.846) and the TUG (ICC(3,1) = 0.944). The concurrent validity data for the VR and CM (ICC(3,k)) were moderate for the FFR (ICC = 0.682) and excellent for the 5 × STS (ICC = 0.889) and the TUG (ICC = 0.878). The concurrent validity of the 30-s stand test was good (ICC(3,1) = 0.735). This study supports the use of VR equipment for measuring physical performance tests in the clinic for healthy community-dwelling elders. Virtual reality equipment is not only used to treat balance impairments but is also used to measure and determine physical impairments through the use of physical performance tests. Virtual reality equipment is a reliable and valid tool for collecting physical performance data for the 5 × STS, FFR, TUG and 30-s stand test for healthy community-dwelling elders.
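
    The test-retest coefficients quoted above are ICC(3,1) values; a minimal implementation of that Shrout-Fleiss form for an n-subjects-by-k-sessions score matrix is sketched below, using invented Timed Up and Go times rather than the study's measurements.

```python
import numpy as np

def icc_3_1(data: np.ndarray) -> float:
    """ICC(3,1), two-way mixed, single measure (Shrout & Fleiss),
    for an (n_subjects, k_sessions) matrix of scores."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical test-retest TUG times (seconds) for 10 participants, 2 sessions
tug = np.array([[8.1, 8.3], [9.4, 9.2], [7.8, 8.0], [10.2, 10.5], [8.9, 8.7],
                [11.0, 10.8], [9.7, 9.9], [8.4, 8.6], [12.1, 11.8], [7.5, 7.7]])
print(f"ICC(3,1) = {icc_3_1(tug):.3f}")
```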

  2. Ensuring the validity of calculated subcritical limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.

  3. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.

  4. The Validity and Contributing Physiological Factors to 30-15 Intermittent Fitness Test Performance in Rugby League.

    PubMed

    Scott, Tannath J; Duthie, Grant M; Delaney, Jace A; Sanctuary, Colin E; Ballard, David A; Hickmans, Jeremy A; Dascombe, Ben J

    2017-09-01

    Scott, TJ, Duthie, GM, Delaney, JA, Sanctuary, CE, Ballard, DA, Hickmans, JA, and Dascombe, BJ. The validity and contributing physiological factors to 30-15 intermittent fitness test performance in rugby league. J Strength Cond Res 31(9): 2409-2416, 2017-This study examined the validity of the 30-15 Intermittent Fitness Test (30-15IFT) within rugby league. Sixty-three Australian elite and junior-elite rugby league players (22.5 ± 4.5 years, 96.1 ± 9.5 kg, Σ7 skinfolds: 71.0 ± 18.7 mm) from a professional club participated in this study. Players were assessed for anthropometry (body mass, Σ7 skinfolds, lean mass index), prolonged high-intensity intermittent running (PHIR; measured by 30-15IFT), predicted aerobic capacity (MSFT) and power (AAS), speed (40 m sprint), repeated sprint, and change of direction (COD-505 agility test) ability before and after an 11-week preseason training period. Validity of the 30-15IFT was established using Pearson's correlation coefficients. A forward stepwise regression model identified the fewest variables that could predict individual final velocity (VIFT) and change within 30-15IFT performance. Significant correlations between VIFT and Σ7 skinfolds, repeated sprint decrement, V̇O2maxMSFT, and average aerobic speed were observed. A total of 71.8% of the adjusted variance in 30-15IFT performance was explained using a 4-step best fit model (V̇O2maxMSFT, 61.4%; average aerobic speed, 4.7%; maximal velocity, 4.1%; lean mass index, 1.6%). Across the training period, 25% of the variance was accounted for by ΔV̇O2maxMSFT (R = 0.25). These relationships suggest that the 30-15IFT is a valid test of PHIR within rugby league. Poor correlations were observed with measures of acceleration, speed, and COD. These findings demonstrate that although the 30-15IFT is a valid measure of PHIR, it also simultaneously examines various physiological capacities that differ between sporting cohorts.
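
    The 4-step best-fit model described above comes from a forward stepwise selection; the sketch below shows the mechanics of such a procedure, greedily adding the predictor that most improves adjusted R², on simulated data with hypothetical variable names rather than the study's measurements or coefficients.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an OLS fit with intercept."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def forward_stepwise(X, y, names):
    """Greedily add the predictor that most improves adjusted R^2."""
    selected, best = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining:
        score, j = max((adjusted_r2(X[:, selected + [k]], y), k) for k in remaining)
        if score <= best:
            break
        best = score
        selected.append(j)
        remaining.remove(j)
        print(f"added {names[j]:<18s} adjusted R^2 = {score:.3f}")
    return [names[j] for j in selected]

# Toy data standing in for VO2max, average aerobic speed, max velocity, lean mass index
rng = np.random.default_rng(4)
names = ["vo2max", "aerobic_speed", "max_velocity", "lean_mass_index"]
X = rng.normal(size=(63, 4))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.5, size=63)
forward_stepwise(X, y, names)
```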

  5. Predictive Validity of the Air Force Officer Qualifying Test for USAF Air Battle Manager Training Performance

    DTIC Science & Technology

    2008-09-01

    performance criteria including passing/failing training, training grades, class rank (Carretta & Ree, 2003; Olea & Ree, 1994), and several non... are consistent with prior validations of the AFOQT versus academic performance criteria in pilot (Carretta & Ree, 1995; Olea & Ree, 1994; Ree, Carretta, & Teachout, 1995) and navigator (Olea & Ree, 1994) training. Subsequent analyses took three different approaches to examine the

  6. Mental workload and performance experiment (15-IML-1)

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1992-01-01

    Whether on Earth or in space, people tend to work more productively in settings designed for efficiency and comfort. Because comfortable and stress-free working environments enhance performance and contribute to congenial relationships among co-workers, the living and working arrangements for spacecraft to be used for missions lasting months or years assume particular importance. The Mental Workload and Performance Experiment (MWPE), in part, examines the appropriate design of workstations for performance of various tasks in microgravity, by providing a variable-configuration workstation that may be adjusted by the astronauts.

  7. VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.

  8. VALUE: A framework to validate downscaling approaches for climate change studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-01-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. In this paper, we present the key ingredients of this framework. VALUE's main approach to validation is user- focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.

  9. Development and Validation of an Assessment Instrument for Course Experience in a General Education Integrated Science Course

    ERIC Educational Resources Information Center

    Liu, Juhong Christie; St. John, Kristen; Courtier, Anna M. Bishop

    2017-01-01

    Identifying instruments and surveys to address geoscience education research (GER) questions is among the high-ranked needs in a 2016 survey of the GER community (St. John et al., 2016). The purpose of this study was to develop and validate a student-centered assessment instrument to measure course experience in a general education integrated…

  10. Simultaneous quantification of withanolides in Withania somnifera by a validated high-performance thin-layer chromatographic method.

    PubMed

    Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S

    2008-01-01

    This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.
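
    Linearity figures such as the r2 values quoted above come from fitting a straight-line calibration of detector response against amount applied over the 66-330 ng/band range; a minimal sketch with made-up densitometer readings (not the study's data) is shown below.

```python
import numpy as np

# Hypothetical calibration data for one withanolide: peak area vs amount (ng/band)
amount = np.array([66, 132, 198, 264, 330], dtype=float)
area = np.array([1510, 3090, 4540, 6120, 7650], dtype=float)   # made-up densitometer readings

slope, intercept = np.polyfit(amount, area, 1)                  # straight-line calibration
predicted = slope * amount + intercept
r2 = 1 - ((area - predicted) ** 2).sum() / ((area - area.mean()) ** 2).sum()
print(f"area = {slope:.2f} * amount + {intercept:.1f},  r^2 = {r2:.4f}")
```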

  11. A user-targeted synthesis of the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale-interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (global climate model predictors): how well is regional climate represented overall, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data to eliminate global climate model errors, over the period 1979-2008. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations or (3) gridded spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling inter-comparison project so far. The

  12. Assessment of construct validity of a virtual reality laparoscopy simulator.

    PubMed

    Rosenthal, Rachel; Gantert, Walter A; Hamel, Christian; Hahnloser, Dieter; Metzger, Juerg; Kocher, Thomas; Vogelbach, Peter; Scheidegger, Daniel; Oertli, Daniel; Clavien, Pierre-Alain

    2007-08-01

    The aim of this study was to assess whether virtual reality (VR) can discriminate between the skills of novices and intermediate-level laparoscopic surgical trainees (construct validity), and whether the simulator assessment correlates with an expert's evaluation of performance. Three hundred and seven (307) participants of the 19th-22nd Davos International Gastrointestinal Surgery Workshops performed the clip-and-cut task on the Xitact LS 500 VR simulator (Xitact S.A., Morges, Switzerland). According to their previous experience in laparoscopic surgery, participants were assigned to the basic course (BC) or the intermediate course (IC). Objective performance parameters recorded by the simulator were compared to the standardized assessment by the course instructors during laparoscopic pelvitrainer and conventional surgery exercises. IC participants performed significantly better on the VR simulator than BC participants for the task completion time as well as the economy of movement of the right instrument, not the left instrument. Participants with maximum scores in the pelvitrainer cholecystectomy task performed the VR trial significantly faster, compared to those who scored less. In the conventional surgery task, a significant difference between those who scored the maximum and those who scored less was found not only for task completion time, but also for economy of movement of the right instrument. VR simulation provides a valid assessment of psychomotor skills and some basic aspects of spatial skills in laparoscopic surgery. Furthermore, VR allows discrimination between trainees with different levels of experience in laparoscopic surgery establishing construct validity for the Xitact LS 500 clip-and-cut task. Virtual reality may become the gold standard to assess and monitor surgical skills in laparoscopic surgery.

  13. A Youth Performing Arts Experience: Psychological Experiences, Recollections, and the Desire to Do It Again

    ERIC Educational Resources Information Center

    Trayes, Jan; Harre, Niki; Overall, Nickola C.

    2012-01-01

    Stage Challenge is a performing arts competition for New Zealand secondary schools. This longitudinal study used observations, repeated questionnaires, informal conversations, and a graffiti board to follow the 5-month experience of a student-led girls' team aged 10 to 17 years (n = 103). The focus was on the quality of their experience and what…

  14. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortexes triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, deviations of the simulations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  15. Evaluating the accuracy of the Wechsler Memory Scale-Fourth Edition (WMS-IV) logical memory embedded validity index for detecting invalid test performance.

    PubMed

    Soble, Jason R; Bain, Kathleen M; Bailey, K Chase; Kirton, Joshua W; Marceaux, Janice C; Critchfield, Edan A; McCoy, Karin J M; O'Rourke, Justin J F

    2018-01-08

    Embedded performance validity tests (PVTs) allow for continuous assessment of invalid performance throughout neuropsychological test batteries. This study evaluated the utility of the Wechsler Memory Scale-Fourth Edition (WMS-IV) Logical Memory (LM) Recognition score as an embedded PVT using the Advanced Clinical Solutions (ACS) for WAIS-IV/WMS-IV Effort System. This mixed clinical sample was comprised of 97 total participants, 71 of whom were classified as valid and 26 as invalid based on three well-validated, freestanding criterion PVTs. Overall, the LM embedded PVT demonstrated poor concordance with the criterion PVTs and unacceptable psychometric properties using ACS validity base rates (42% sensitivity/79% specificity). Moreover, 15-39% of participants obtained an invalid ACS base rate despite having a normatively-intact age-corrected LM Recognition total score. Receiver operating characteristic (ROC) curve analysis revealed a Recognition total score cutoff of < 61% correct improved specificity (92%) while sensitivity remained weak (31%). Thus, results indicated the LM Recognition embedded PVT is not appropriate for use from an evidence-based perspective, and that clinicians may be faced with reconciling how a normatively intact cognitive performance on the Recognition subtest could simultaneously reflect invalid performance validity.
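
    The cutoff evaluation reported above follows the usual sensitivity/specificity logic. The sketch below illustrates it with simulated scores (the group sizes mirror the abstract, but the score distributions are invented): a record is flagged as invalid when the Recognition percent-correct falls below the cutoff, then sensitivity is computed in the criterion-invalid group and specificity in the criterion-valid group.

```python
# Minimal sketch (made-up score distributions) of evaluating an embedded-PVT cutoff
# such as "< 61% correct" against freestanding criterion PVTs.
import numpy as np

rng = np.random.default_rng(0)
# 1 = invalid per criterion PVTs, 0 = valid (group sizes mirror the abstract)
criterion_invalid = np.concatenate([np.ones(26, dtype=int), np.zeros(71, dtype=int)])
# Hypothetical Recognition percent-correct scores for each group
recognition_pct = np.concatenate([rng.normal(55, 15, 26), rng.normal(80, 12, 71)])

cutoff = 61.0
flagged_invalid = recognition_pct < cutoff

sensitivity = flagged_invalid[criterion_invalid == 1].mean()   # hit rate among criterion-invalid
specificity = (~flagged_invalid[criterion_invalid == 0]).mean()  # correct rejections among valid
print(f"cutoff < {cutoff:.0f}%: sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```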

  16. Organizational and Market Influences on Physician Performance on Patient Experience Measures

    PubMed Central

    Rodriguez, Hector P; von Glahn, Ted; Rogers, William H; Safran, Dana Gelb

    2009-01-01

    Objective To examine the extent to which medical group and market factors are related to individual primary care physician (PCP) performance on patient experience measures. Data Sources This study employs Clinician and Group CAHPS survey data (n=105,663) from 2,099 adult PCPs belonging to 34 diverse medical groups across California. Medical group directors were interviewed to assess the magnitude and nature of financial incentives directed at individual physicians and the adoption of patient experience improvement strategies. Primary care services area (PCSA) data were used to characterize the market environment of physician practices. Study Design We used multilevel models to estimate the relationship between medical group and market factors and physician performance on each Clinician and Group CAHPS measure. Models statistically controlled for respondent characteristics and accounted for the clustering of respondents within physicians, physicians within medical groups, and medical groups within PCSAs using random effects. Principal Findings Compared with physicians belonging to independent practice associations, physicians belonging to integrated medical groups had better performance on the communication (p=.007) and care coordination (p=.03) measures. Physicians belonging to medical groups with greater numbers of PCPs had better performance on all measures. The use of patient experience improvement strategies was not associated with performance. Greater emphasis on productivity and efficiency criteria in individual physician financial incentive formulae was associated with worse access to care (p=.04). Physicians located in PCSAs with higher area-level deprivation had worse performance on the access to care (p=.04) and care coordination (p<.001) measures. Conclusions Physicians from integrated medical groups and groups with greater numbers of PCPs performed better on several patient experience measures, suggesting that organized care processes adopted by these

  17. Organizational and market influences on physician performance on patient experience measures.

    PubMed

    Rodriguez, Hector P; von Glahn, Ted; Rogers, William H; Safran, Dana Gelb

    2009-06-01

    To examine the extent to which medical group and market factors are related to individual primary care physician (PCP) performance on patient experience measures. This study employs Clinician and Group CAHPS survey data (n=105,663) from 2,099 adult PCPs belonging to 34 diverse medical groups across California. Medical group directors were interviewed to assess the magnitude and nature of financial incentives directed at individual physicians and the adoption of patient experience improvement strategies. Primary care services area (PCSA) data were used to characterize the market environment of physician practices. We used multilevel models to estimate the relationship between medical group and market factors and physician performance on each Clinician and Group CAHPS measure. Models statistically controlled for respondent characteristics and accounted for the clustering of respondents within physicians, physicians within medical groups, and medical groups within PCSAs using random effects. Compared with physicians belonging to independent practice associations, physicians belonging to integrated medical groups had better performance on the communication (p=.007) and care coordination (p=.03) measures. Physicians belonging to medical groups with greater numbers of PCPs had better performance on all measures. The use of patient experience improvement strategies was not associated with performance. Greater emphasis on productivity and efficiency criteria in individual physician financial incentive formulae was associated with worse access to care (p=.04). Physicians located in PCSAs with higher area-level deprivation had worse performance on the access to care (p=.04) and care coordination (p<.001) measures. Physicians from integrated medical groups and groups with greater numbers of PCPs performed better on several patient experience measures, suggesting that organized care processes adopted by these groups may enhance patients' experiences. Physicians practicing
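
    For readers unfamiliar with the modeling setup, a minimal mixed-effects sketch is given below. It is simplified to two levels (patients within physicians) with one physician-level predictor and uses simulated data; the published analysis additionally nested physicians within medical groups and PCSAs and adjusted for respondent characteristics.

```python
# Hedged two-level sketch of a multilevel (random-effects) model of patient
# experience scores clustered within physicians. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_phys, patients_per_phys = 100, 30
phys_id = np.repeat(np.arange(n_phys), patients_per_phys)
integrated = np.repeat(rng.integers(0, 2, n_phys), patients_per_phys)   # physician-level predictor
phys_effect = np.repeat(rng.normal(0, 2.0, n_phys), patients_per_phys)  # physician random intercept
age = rng.normal(55, 12, n_phys * patients_per_phys)                    # respondent characteristic

score = 75 + 2.0 * integrated + 0.05 * age + phys_effect + rng.normal(0, 8, len(age))
df = pd.DataFrame({"score": score, "integrated": integrated, "age": age, "physician": phys_id})

model = smf.mixedlm("score ~ integrated + age", df, groups=df["physician"]).fit()
print(model.summary())
```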

  18. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
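
    A rough numerical illustration of the two-level approximation is sketched below: over one gap cycle the predicted error rate is a square wave, and the high-error segment is taken to be about K-1 bits shorter than the gap itself. All parameter values are illustrative, not taken from the VLA configuration.

```python
# Rough illustration of the two-level square-wave approximation described above.
# Parameter values are illustrative only.
import numpy as np

def two_level_error_cycle(period_bits, gap_bits, K, p_low, p_high):
    """Return the predicted per-bit error rate over one gap cycle."""
    cycle = np.full(period_bits, p_low)
    effective_gap = max(gap_bits - (K - 1), 0)  # gapped error segment shortened by K-1 bits
    cycle[:effective_gap] = p_high
    return cycle

cycle = two_level_error_cycle(period_bits=200, gap_bits=50, K=7, p_low=1e-4, p_high=5e-2)
print(f"high-error segment: {np.sum(cycle > 1e-3)} bits out of a 200-bit cycle")
```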

  19. Measuring women's childbirth experiences: a systematic review for identification and analysis of validated instruments.

    PubMed

    Nilvér, Helena; Begley, Cecily; Berg, Marie

    2017-06-29

    Women's childbirth experience can have immediate as well as long-term positive or negative effects on their life, well-being and health. When evaluating and drawing conclusions from research results, women's experiences of childbirth should be one aspect to consider. Researchers and clinicians need help in finding and selecting the most suitable instrument for their purpose. The aim of this study was therefore to systematically identify and present validated instruments measuring women's childbirth experience. A systematic review was conducted in January 2016 with a comprehensive search in the bibliographic databases PubMed, CINAHL, Scopus, The Cochrane Library and PsycINFO. Included instruments measured women's childbirth experiences. Papers were assessed independently by two reviewers for inclusion, and quality assessment of included instruments was made by two reviewers independently and in pairs using Terwee et al's criteria for evaluation of psychometric properties. In total 5189 citations were screened, of which 5106 were excluded by title and abstract. Eighty-three full-text papers were reviewed, and 37 papers were excluded, resulting in 46 included papers representing 36 instruments. These instruments demonstrated a wide range in purpose and content as well as in the quality of psychometric properties. This systematic review provides an overview of existing instruments measuring women's childbirth experiences and can support researchers to identify appropriate instruments to be used, and maybe adapted, in their specific contexts and research purpose.

  20. The Relationship Between Computer Experience and Computerized Cognitive Test Performance Among Older Adults

    PubMed Central

    2013-01-01

    Objective. This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634). Method. Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables. Results. Although computer experience had a significant main effect across all cognitive measures, the effect sizes were similar. After controlling for computer attitudes, the relationship between computer experience and UFOV was fully attenuated. Discussion. Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. Because the relationship between computer experience and UFOV was fully attenuated by computer attitudes, this may imply that motivational factors are more influential to UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance, and this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided. PMID:22929395

  1. The predictive validity of the BioMedical Admissions Test for pre-clinical examination performance.

    PubMed

    Emery, Joanne L; Bell, John F

    2009-06-01

    Some medical courses in the UK have many more applicants than places and almost all applicants have the highest possible previous and predicted examination grades. The BioMedical Admissions Test (BMAT) was designed to assist in the student selection process specifically for a number of 'traditional' medical courses with clear pre-clinical and clinical phases and a strong focus on science teaching in the early years. It is intended to supplement the information provided by examination results, interviews and personal statements. This paper reports on the predictive validity of the BMAT and its predecessor, the Medical and Veterinary Admissions Test. Results from the earliest 4 years of the test (2000-2003) were matched to the pre-clinical examination results of those accepted onto the medical course at the University of Cambridge. Correlation and logistic regression analyses were performed for each cohort. Section 2 of the test ('Scientific Knowledge') correlated more strongly with examination marks than did Section 1 ('Aptitude and Skills'). It also had a stronger relationship with the probability of achieving the highest examination class. The BMAT and its predecessor demonstrate predictive validity for the pre-clinical years of the medical course at the University of Cambridge. The test identifies important differences in skills and knowledge between candidates, not shown by their previous attainment, which predict their examination performance. It is thus a valid source of additional admissions information for medical courses with a strong scientific emphasis when previous attainment is very high.

  2. THE ADOLESCENT MEASURE OF CONFIDENCE AND MUSCULOSKELETAL PERFORMANCE (AMCAMP): DEVELOPMENT AND INITIAL VALIDATION

    PubMed Central

    May, Keith H.; Edwards, Michael C.; Goldstein, Marc S.

    2016-01-01

    Background Although the relationship of self-efficacy to sports performance is well established, little attention has been paid to self-efficacy in the movements or actions that are required to perform daily activities and prepare the individual to resume sports participation following an injury and associated period of rehabilitation. There are no instruments to measure self-confidence in movement validated in an adolescent population. Purpose The purpose of this paper is to report on the development of the AMCaMP, a self-report measure of confidence in movement and provide some initial evidence to support its use as a measure of confidence in movement. Methods The AMCaMP was adapted from OPTIMAL, a self-report instrument that measures confidence in movement, which had been previously designed and validated in an adult population. Data were collected from 1,115 adolescent athletes from 12 outpatient physical therapy clinics in a single healthcare system. Results Exploratory factor analysis of the 22 items of the AMCaMP using a test sample revealed a three factor structure (trunk, lower body, upper body). Confirmatory factor analysis using a validation sample demonstrated a similar model fit with the data. Reliability of scores on each of three clusters of items identified by factor analysis was assessed with coefficient alpha (range = 0.82 to 0.94), Standard Error of Measurement (1.38 to 2.74), and Minimum Detectable Change (3.83 to 7.6). Conclusions AMCaMP has acceptable psychometric properties for use in adolescents (ages 11 to 18) as a patient-centric outcome measure of confidence in movement abilities after rehabilitation. Level of Evidence IV PMID:27757282
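
    The reliability quantities quoted above are related by standard formulas: SEM = SD × sqrt(1 − alpha) and MDC95 = 1.96 × sqrt(2) × SEM. The sketch below applies the MDC formula to the reported SEM range and reproduces the quoted 3.83-7.6 range; the subscale SD used to illustrate the SEM formula is invented.

```python
# Standard reliability arithmetic behind the reported SEM and MDC values.
import math

def sem_from_reliability(sd, alpha):
    """Standard error of measurement from a scale SD and coefficient alpha."""
    return sd * math.sqrt(1 - alpha)

def mdc95(sem):
    """Minimum detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2) * sem

for reported_sem in (1.38, 2.74):
    print(f"SEM = {reported_sem:.2f} -> MDC95 = {mdc95(reported_sem):.2f}")

# Illustrative (invented) subscale: SD = 5.6, alpha = 0.94
print(f"illustrative SEM = {sem_from_reliability(5.6, 0.94):.2f}")
```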

  3. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    NASA Technical Reports Server (NTRS)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  4. Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2m diameter, f/1.29 88% lightweighted SCHOTT lightweighted ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.

  5. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
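
    Biorthogonal decomposition of a space-time data matrix is, in practice, a singular value decomposition: the left singular vectors give spatial structures, the right singular vectors give temporal structures, and the singular values weight each pair. The sketch below, on synthetic probe data, shows one simple agreement metric of the kind such a comparison might use; it is not the PSI-Center's actual metric.

```python
# Minimal BOD-style comparison of "experiment" and "simulation" signals arranged
# as (n_probes x n_times) matrices, using the SVD. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_times = 16, 400
t = np.linspace(0, 1, n_times)
spatial = np.sin(np.linspace(0, np.pi, n_probes))                 # shared spatial structure
experiment = np.outer(spatial, np.sin(2 * np.pi * 5 * t)) + 0.05 * rng.standard_normal((n_probes, n_times))
simulation = np.outer(spatial, np.sin(2 * np.pi * 5 * t + 0.1)) + 0.05 * rng.standard_normal((n_probes, n_times))

U_exp, s_exp, _ = np.linalg.svd(experiment, full_matrices=False)
U_sim, s_sim, _ = np.linalg.svd(simulation, full_matrices=False)

overlap = abs(U_exp[:, 0] @ U_sim[:, 0])          # agreement of dominant spatial modes
energy_ratio = (s_sim[0] ** 2) / (s_exp[0] ** 2)  # relative weight of the dominant structure
print(f"dominant-mode overlap = {overlap:.3f}, energy ratio = {energy_ratio:.3f}")
```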

  6. Spanish and Catalan translation, cultural adaptation and validation of the Picker Patient Experience Questionnaire-15.

    PubMed

    Bertran, M J; Viñarás, M; Salamero, M; Garcia, F; Graham, C; McCulloch, A; Escarrabill, J

    To develop and test a culturally adapted core set of questions to measure patients' experience after in-patient care. Following the methodology recommended by international guides, a basic set of patient experience questions, selected from Picker Institute Europe questionnaires (originally in English), was translated to Spanish and Catalan. Acceptability, construct validity and reliability of the adapted questionnaire were assessed via a cross-sectional validation study. The inclusion criteria were patients aged >18 years, discharged within one week to one month prior to questionnaire sending and whose email was available. Day cases, emergency department patients and deaths were excluded. Invitations were sent by email (N=876) and the questionnaire was completed through an online platform. An automatic reminder was sent 5 days later to non-respondents. A questionnaire, in Spanish and Catalan, with adequate conceptual and linguistic equivalence was obtained. Response rate was 44.4% (389 responses). The correlation matrix was factorable. Four factors were extracted with Parallel Analysis, which explained 43% of the total variance. First factor: information and communication received during discharge. Second factor: low sensitivity attitudes of professionals. Third factor: assessment of communication of medical and nursing staff. Fourth factor: global items. The value of the Cronbach alpha was 0.84, showing a high internal consistency. The obtained patient experience questionnaire, in Spanish and Catalan, shows good results in the psychometric properties evaluated and could be a useful tool to identify opportunities for health care improvement in our context. Email could become a feasible tool for greater patient participation in everything that concerns their health. Copyright © 2018 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  7. Performance testing of a vertical Bridgman furnace using experiments and numerical modeling

    NASA Astrophysics Data System (ADS)

    Rosch, W. R.; Fripp, A. L.; Debnam, W. J.; Pendergrass, T. K.

    1997-04-01

    This paper details a portion of the work performed in preparation for the growth of lead tin telluride crystals during a Space Shuttle flight. A coordinated effort of experimental measurements and numerical modeling was completed to determine the optimum growth parameters and the performance of the furnace. This work was done using NASA's Advanced Automated Directional Solidification Furnace, but the procedures used should be equally valid for other vertical Bridgman furnaces.

  8. Competency-based assessment in surgeon-performed head and neck ultrasonography: A validity study.

    PubMed

    Todsen, Tobias; Melchiors, Jacob; Charabi, Birgitte; Henriksen, Birthe; Ringsted, Charlotte; Konge, Lars; von Buchwald, Christian

    2018-06-01

    Head and neck ultrasonography (HNUS) increasingly is used as a point-of-care diagnostic tool by otolaryngologists. However, ultrasonography (US) is a very operator-dependent image modality. Hence, this study aimed to explore the diagnostic accuracy of surgeon-performed HNUS and to establish validity evidence for an objective structured assessment of ultrasound skills (OSAUS) used for competency-based assessment. A prospective experimental study. Six otolaryngologists and 11 US novices were included in a standardized test setup for which they had to perform focused HNUS of eight patients suspected for different head and neck lesions. Their diagnostic accuracy was calculated based on the US reports, and two blinded raters assessed the video-recorded US performance using the OSAUS scale. The otolaryngologists obtained a high diagnostic accuracy of 88% (range 63%-100%) compared to 38% (range 0-63%) for the US novices; P < 0.001. The OSAUS score demonstrated good inter-case reliability (0.85) and inter-rater reliability (0.76), and significant discrimination between otolaryngologists and US novices; P < 0.001. A strong correlation between the OSAUS score and the diagnostic accuracy was found (Spearman's ρ = 0.85; P < 0.001), and a pass/fail score was established at 2.8. Strong validity evidence supported the use of the OSAUS scale to assess HNUS competence with good reliability, significant discrimination between US competence levels, and a strong correlation of assessment score to diagnostic accuracy. An OSAUS pass/fail score was established and could be used for competence-based assessment in surgeon-performed HNUS. NA. Laryngoscope, 128:1346-1352, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  9. Offshore Radiation Observations for Climate Research at the CERES Ocean Validation Experiment

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.; Schuster, Gregory L.; Charlock, Thomas P.; Denn, Frederick M.; Smith, William L., Jr.; Fabbri, Bryan E.; Madigan, James J., Jr.; Knapp, Robert J.

    2006-01-01

    When radiometers on a satellite are pointed towards the planet with the goal of understanding a phenomenon quantitatively, rather than just creating a pleasing image, the task at hand is often problematic. The signal at the detector can be affected by scattering, absorption, and emission; and these can be due to atmospheric constituents (gases, clouds, and aerosols), the earth's surface, and subsurface features. When targeting surface phenomena, the remote sensing algorithm needs to account for the radiation associated with the atmospheric constituents. Likewise, one needs to correct for the radiation leaving the surface, when atmospheric phenomena are of interest. Rigorous validation of such remote sensing products is a real challenge. In visible and near-infrared wavelengths, disentangling this jumble of effects on the measured radiation is best accomplished over dark surfaces with fairly uniform reflective properties (spatial homogeneity) in the satellite instrument's field of view (FOV). The ocean's surface meets these criteria; land surfaces - which are brighter, more spatially inhomogeneous, and more changeable with time - generally do not. NASA's Clouds and the Earth's Radiant Energy System (CERES) project has used this backdrop to establish a radiation monitoring site in Virginia's coastal Atlantic Ocean. The project, called the CERES Ocean Validation Experiment (COVE), is located on a rigid ocean platform allowing the accurate measurement of radiation parameters that require precise leveling and pointing unavailable from ships or buoys. The COVE site is an optimal location for verifying radiative transfer models and remote sensing algorithms used in climate research; because of the platform's small size, there are no island wake effects; and suites of sensors can be simultaneously trained both on the sky and directly on the ocean itself. This paper describes the site, the types of measurements made, multiple years of atmospheric and ocean surface radiation observations, and

  10. A CFD validation roadmap for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments would provide the needed validation data.

  11. A CFD validation roadmap for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments would provide the needed validation data.

  12. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    1999-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra validation Program can be found on the EOS Validation program

  13. Development and Validation of a Mathematics Anxiety Scale for Students

    ERIC Educational Resources Information Center

    Ko, Ho Kyoung; Yi, Hyun Sook

    2011-01-01

    This study developed and validated a Mathematics Anxiety Scale for Students (MASS) that can be used to measure the level of mathematics anxiety that students experience in school settings and help them overcome anxiety and perform better in mathematics achievement. We conducted a series of preliminary analyses and panel reviews to evaluate quality…

  14. An Investigation of Experiment Designs for Applications in Biofeedback-Performance Research Methodologies.

    DTIC Science & Technology

    1980-09-01

    used to accomplish the necessary research. One such experiment design and its relationship to validity will be explored next. Nonequivalent Control ...interpreting the results. The non-equivalent control group design is of the quasi-experimental variety and is widely used in educational research. As...biofeedback research literature is the controlled group outcome study. This design has also been discussed in Chapter III in two forms as the

  15. Further Validation of a CFD Code for Calculating the Performance of Two-Stage Light Gas Guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.

    2017-01-01

    Earlier validations of a higher-order Godunov code for modeling the performance of two-stage light gas guns are reviewed. These validation comparisons were made between code predictions and experimental data from the NASA Ames 1.5" and 0.28" guns and covered muzzle velocities of 6.5 to 7.2 km/s. In the present report, five more series of code validation comparisons involving experimental data from the Ames 0.22" (1.28" pump tube diameter), 0.28", 0.50", 1.00" and 1.50" guns are presented. The total muzzle velocity range of the validation data presented herein is 3 to 11.3 km/s. The agreement between the experimental data and CFD results is judged to be very good. Muzzle velocities were predicted to within 0.35 km/s for 74% of the cases studied; the maximum differences were 0.5 km/s, except for 4 of the 50 cases, where they were 0.5-0.7 km/s.

  16. Strain gauge validation experiments for the Sandia 34-meter VAWT (Vertical Axis Wind Turbine) test bed

    NASA Astrophysics Data System (ADS)

    Sutherland, Herbert J.

    1988-08-01

    Sandia National Laboratories has erected a research-oriented, 34-meter diameter, Darrieus vertical axis wind turbine near Bushland, Texas. This machine, designated the Sandia 34-m VAWT Test Bed, is equipped with a large array of strain gauges that have been placed at critical positions about the blades. This manuscript details a series of four-point bend experiments that were conducted to validate the output of the blade strain gauge circuits. The output of a particular gauge circuit is validated by comparing its output to equivalent gauge circuits (in this stress state) and to theoretical predictions. With only a few exceptions, the difference between measured and predicted strain values for a gauge circuit was found to be of the order of the estimated repeatability for the measurement system.

  17. Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

    ERIC Educational Resources Information Center

    Acar, Tülin

    2014-01-01

    In the literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical structure and/or psychological structure, validity studies such as cross validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…

  18. Early Childhood Practitioner Judgments of the Social Validity of Performance Checklists and Parent Practice Guides

    ERIC Educational Resources Information Center

    Dunst, Carl J.

    2017-01-01

    Findings from three field-test evaluations of early childhood intervention practitioner performance checklists and three parent practice guides are reported. Forty-two practitioners from three early childhood intervention programs reviewed the checklists and practice guides and made (1) social validity judgments of both products, (2) judgments of…

  19. Development of an ultra high performance liquid chromatography method for determining triamcinolone acetonide in hydrogels using the design of experiments/design space strategy in combination with process capability index.

    PubMed

    Oliva, Alexis; Monzón, Cecilia; Santoveña, Ana; Fariña, José B; Llabrés, Matías

    2016-07-01

    An ultra high performance liquid chromatography method was developed and validated for the quantitation of triamcinolone acetonide in an injectable ophthalmic hydrogel to determine the contribution of analytical method error in the content uniformity measurement. During the development phase, the design of experiments/design space strategy was used. For this, the free R environment was used as an alternative to commercial software, providing a fast, efficient tool for data analysis. The process capability index was used to find the permitted level of variation for each factor and to define the design space. All these aspects were analyzed and discussed under different experimental conditions by the Monte Carlo simulation method. Second, a pre-study validation procedure was performed in accordance with the International Conference on Harmonization guidelines. The validated method was applied for the determination of uniformity of dosage units and the reasons for variability (inhomogeneity and the analytical method error) were analyzed based on the overall uncertainty. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
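
    The process capability index used to bound factor variation can be illustrated with a small Monte Carlo sketch: simulate the assay response under random variation of one method factor and compute Cpk against specification limits. The response model, limits, and factor below are invented for illustration; a Cpk of about 1.33 or more is conventionally taken as capable.

```python
# Hedged Monte Carlo sketch of a process capability (Cpk) check on a simulated
# assay response. Specification limits, nominal response, and variability are
# illustrative only, not the published method's values.
import numpy as np

rng = np.random.default_rng(42)
LSL, USL = 98.0, 102.0   # assumed specification limits (% of nominal)

def simulate_recovery(flow_rate_sd, n=10_000):
    """Monte Carlo: recovery (%) as a simple linear response to flow-rate noise."""
    flow_rate = rng.normal(0.4, flow_rate_sd, n)                        # mL/min around set point
    return 100.0 + 8.0 * (flow_rate - 0.4) + rng.normal(0, 0.3, n)      # illustrative response model

def cpk(x, lsl, usl):
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

for sd in (0.01, 0.05, 0.10):
    print(f"flow-rate SD {sd:.2f} mL/min -> Cpk = {cpk(simulate_recovery(sd), LSL, USL):.2f}")
```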

  20. Validation of a novel virtual reality simulator for robotic surgery.

    PubMed

    Schreuder, Henk W R; Persson, Jan E U; Wolswijk, Richard G H; Ihse, Ingmar; Schijven, Marlies P; Verheijen, René H M

    2014-01-01

    With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for the use in training of robot-assisted surgery. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were "time to complete" and "economy of motion" (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, visual graphics, movements of instruments, interaction with objects, and the depth perception were all rated as being realistic. The simulator is considered to be a very useful training tool for residents and medical specialist starting with robotic surgery. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery.

  1. Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure

    NASA Technical Reports Server (NTRS)

    Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.

    1993-01-01

    An experiment conducted in the framework of a NASA In-Space Technology Experiments Program based on a concept of inflatable deployable structures is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit; gravity-induced deflection of the structure precludes any meaningful ground-based demonstration of functional performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.

  2. Changing abilities vs. changing tasks: Examining validity degradation with test scores and college performance criteria both assessed longitudinally.

    PubMed

    Dahlke, Jeffrey A; Kostal, Jack W; Sackett, Paul R; Kuncel, Nathan R

    2018-05-03

    We explore potential explanations for validity degradation using a unique predictive validation data set containing up to four consecutive years of high school students' cognitive test scores and four complete years of those students' college grades. This data set permits analyses that disentangle the effects of predictor-score age and timing of criterion measurements on validity degradation. We investigate the extent to which validity degradation is explained by criterion dynamism versus the limited shelf-life of ability scores. We also explore whether validity degradation is attributable to fluctuations in criterion variability over time and/or GPA contamination from individual differences in course-taking patterns. Analyses of multiyear predictor data suggest that changes to the determinants of performance over time have much stronger effects on validity degradation than does the shelf-life of cognitive test scores. The age of predictor scores had only a modest relationship with criterion-related validity when the criterion measurement occasion was held constant. Practical implications and recommendations for future research are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Construct Validity and Scoring Methods of the World Health Organization: Health and Work Performance Questionnaire Among Workers With Arthritis and Rheumatological Conditions.

    PubMed

    AlHeresh, Rawan; LaValley, Michael P; Coster, Wendy; Keysor, Julie J

    2017-06-01

    To evaluate construct validity and scoring methods of the World Health Organization Health and Work Performance Questionnaire (HPQ) for people with arthritis. Construct validity was examined through hypothesis testing using the recommended guidelines of the consensus-based standards for the selection of health measurement instruments (COSMIN). The HPQ using the absolute scoring method showed moderate construct validity, as four of the seven hypotheses were met. The HPQ using the relative scoring method had weak construct validity, as only one of the seven hypotheses was met. The absolute scoring method for the HPQ is superior in construct validity to the relative scoring method in assessing work performance among people with arthritis and related rheumatic conditions; however, more research is needed to further explore other psychometric properties of the HPQ.

  4. CFD Validation with Experiment and Verification with Physics of a Propellant Damping Device

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Peugeot, John

    2011-01-01

    This paper will document our effort to validate a coupled fluid-structure interaction CFD tool for predicting the performance of a damping device under laboratory conditions. Consistently good comparisons of "blind" CFD predictions against experimental data under various operating conditions, design parameters, and cryogenic environments will be presented. The power of the coupled CFD-structure interaction code in explaining some unexpected phenomena of the device observed during technology development will be illustrated. The evolution of the damper device design inside the LOX tank will be used to demonstrate the contribution of the tool to understanding, optimization and implementation of the LOX damper in the Ares I vehicle. Owing to the present validation effort, the LOX damper technology has matured to TRL 5. The present effort has also contributed to the transition of the technology from an early conceptual observation to the baseline design for thrust oscillation mitigation for the Ares I within a 10 month period.

  5. Measuring Black men's police-based discrimination experiences: Development and validation of the Police and Law Enforcement (PLE) Scale.

    PubMed

    English, Devin; Bowleg, Lisa; Del Río-González, Ana Maria; Tschann, Jeanne M; Agans, Robert P; Malebranche, David J

    2017-04-01

    Although social science research has examined police and law enforcement-perpetrated discrimination against Black men using policing statistics and implicit bias studies, there is little quantitative evidence detailing this phenomenon from the perspective of Black men. Consequently, there is a dearth of research detailing how Black men's perspectives on police and law enforcement-related stress predict negative physiological and psychological health outcomes. This study addresses these gaps with the qualitative development and quantitative test of the Police and Law Enforcement (PLE) Scale. In Study 1, we used thematic analysis on transcripts of individual qualitative interviews with 90 Black men to assess key themes and concepts and develop quantitative items. In Study 2, we used 2 focus groups comprised of 5 Black men each (n = 10), intensive cognitive interviewing with a separate sample of Black men (n = 15), and piloting with another sample of Black men (n = 13) to assess the ecological validity of the quantitative items. For Study 3, we analyzed data from a sample of 633 Black men between the ages of 18 and 65 to test the factor structure of the PLE, as well as its concurrent validity and convergent/discriminant validity. Qualitative analyses and confirmatory factor analyses suggested that a 5-item, 1-factor measure appropriately represented respondents' experiences of police/law enforcement discrimination. As hypothesized, the PLE was positively associated with measures of racial discrimination and depressive symptoms. Preliminary evidence suggests that the PLE is a reliable and valid measure of Black men's experiences of discrimination with police/law enforcement. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. The Depressive Experiences Questionnaire: validity and psychological correlates in a clinical sample.

    PubMed

    Riley, W T; McCranie, E W

    1990-01-01

    This study sought to compare the original and revised scoring systems of the Depressive Experiences Questionnaire (DEQ) and to assess the construct validity of the Dependent and Self-Critical subscales of the DEQ in a clinically depressed sample. Subjects were 103 depressed inpatients who completed the DEQ, the Beck Depression Inventory (BDI), the Hopelessness Scale, the Automatic Thoughts Questionnaire (ATQ), the Rathus Assertiveness Schedule (RAS), and the Minnesota Multiphasic Personality Inventory (MMPI). The original and revised scoring systems of the DEQ evidenced good concurrent validity for each factor scale, but the revised system did not sufficiently discriminate dependent and self-critical dimensions. Using the original scoring system, self-criticism was significantly and positively related to severity of depression, whereas dependency was not, particularly for males. Factor analysis of the DEQ scales and the other scales used in this study supported the dependent and self-critical dimensions. For men, the correlation of the DEQ with the MMPI scales indicated that self-criticism was associated with psychotic symptoms, hostility/conflict, and a distress/exaggerated response set, whereas dependency did not correlate significantly with any MMPI scales. Females, however, did not exhibit a differential pattern of correlations between either the Dependency or the Self-Criticism scales and the MMPI. These findings suggest possible gender differences in the clinical characteristics of male and female dependent and self-critical depressive subtypes.

  7. Improvements in the simulation code of the SOX experiment

    NASA Astrophysics Data System (ADS)

    Caminata, A.; Agostini, M.; Altenmüeller, K.; Appel, S.; Atroshchenko, V.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; Cribier, M.; D'Angelo, D.; Davini, S.; Derbin, A.; Di Noto, L.; Drachnev, I.; Durero, M.; Etenko, A.; Farinon, S.; Fischer, V.; Fomenko, K.; Franco, D.; Gabriele, F.; Gaffiot, J.; Galbiati, C.; Gschwender, M.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, Th.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jonquères, N.; Jany, A.; Jedrzejczak, K.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kornoukhov, V.; Kryn, D.; Lachenmaier, T.; Lasserre, T.; Laubenstein, M.; Lehnert, B.; Link, J.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Manuzio, G.; Marcocci, S.; Maricic, J.; Mention, G.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Musenich, R.; Neumair, B.; Oberauer, L.; Obolensky, M.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Scola, L.; Semenov, D.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Veyssiére, C.; Vishneva, A.; Vivier, M.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Winter, J.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2017-09-01

    The aim of the SOX experiment is to test the hypothesis of the existence of light sterile neutrinos through a short-baseline experiment. Electron antineutrinos will be produced by a high-activity source and detected in the Borexino experiment. Both an oscillometry approach and a conventional disappearance analysis will be performed and, if combined, SOX will be able to investigate most of the anomaly region at 95% c.l. This paper focuses on the improvements performed on the simulation code and on the techniques (calibrations) used to validate the results.

  8. Cancer patient experience, hospital performance and case mix: evidence from England.

    PubMed

    Abel, Gary A; Saunders, Catherine L; Lyratzopoulos, Georgios

    2014-01-01

      This study aims to explore differences between crude and case mix-adjusted estimates of hospital performance with respect to the experience of cancer patients. This study analyzed the English 2011/2012 Cancer Patient Experience Survey covering all English National Health Service hospitals providing cancer treatment (n = 160). Logistic regression analysis was used to predict hospital performance for each of the 64 evaluative questions, adjusting for age, gender, ethnic group and cancer diagnosis. The degree of reclassification was explored across three categories (bottom 20%, middle 60% and top 20% of hospitals). There was high concordance between crude and adjusted ranks of hospitals (median Kendall's τ = 0.84; interquartile range: 0.82-0.88). Across all questions, a median of 5.0% (eight) of hospitals (interquartile range: 3.8-6.4%; six to ten hospitals) moved out of the extreme performance categories after case mix adjustment. In this context, patient case mix has only a small impact on measured hospital performance for cancer patient experience.
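
    The concordance and reclassification summary reported above can be reproduced on simulated data with a few lines: compute Kendall's tau between crude and case-mix-adjusted hospital scores and count the hospitals that leave the bottom-20% or top-20% categories after adjustment. The scores below are simulated, not the survey data.

```python
# Illustrative concordance/reclassification sketch on simulated hospital scores.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
n_hospitals = 160
crude = rng.normal(80, 5, n_hospitals)
adjusted = crude + rng.normal(0, 1.0, n_hospitals)   # small case-mix shift

tau, _ = kendalltau(crude, adjusted)

def category(scores):
    """0 = bottom 20%, 1 = middle 60%, 2 = top 20% of hospitals."""
    lo, hi = np.quantile(scores, [0.2, 0.8])
    return np.digitize(scores, [lo, hi])

moved = np.sum((category(crude) != category(adjusted)) &
               (np.isin(category(crude), [0, 2])))
print(f"Kendall's tau = {tau:.2f}; {moved} hospitals left an extreme category")
```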

  9. The Multifactor Measure of Performance: Its Development, Norming, and Validation.

    PubMed

    Bar-On, Reuven

    2018-01-01

    This article describes the development as well as the initial norming and validation of the Multifactor Measure of Performance™ (MMP™), which is a psychometric instrument that is designed to study, assess and enhance key predictors of human performance to help individuals perform at a higher level. It was created by the author for the purpose of going beyond existing conceptual and psychometric models that often focus on relatively few factors that are purported to assess performance at school, in the workplace and elsewhere. The relative sparsity of multifactorial pre-employment assessment instruments exemplifies, for the author, one of the important reasons for developing the MMP™, which attempts to comprehensively evaluate a wider array of factors that are thought to contribute to performance. In that this situation creates a need in the area of test-construction that should be addressed, the author sought to develop a multifactorial assessment and development instrument that could concomitantly evaluate a combination of physical, cognitive, intra-personal, inter-personal, and motivational factors that significantly contribute to performance. The specific aim of this article is to show why, how and if this could be done as well as to present and discuss the potential importance of the results obtained to date. The findings presented here will hopefully add to what is known about human performance and thus contribute to the professional literature, in addition to contributing to the continued development of the MMP™. The impetus for developing the MMP™ is first explained below, followed by a detailed description of the process involved and the findings obtained; and their potential application is then discussed as well as the possible limitations of the present research and the need for future studies to address them.

  10. The Multifactor Measure of Performance: Its Development, Norming, and Validation

    PubMed Central

    Bar-On, Reuven

    2018-01-01

    This article describes the development as well as the initial norming and validation of the Multifactor Measure of Performance™ (MMP™), which is a psychometric instrument designed to study, assess and enhance key predictors of human performance in order to help individuals perform at a higher level. It was created by the author to go beyond existing conceptual and psychometric models that often focus on relatively few factors purported to assess performance at school, in the workplace and elsewhere. The relative sparsity of multifactorial pre-employment assessment instruments exemplifies, for the author, one of the important reasons for developing the MMP™, which attempts to comprehensively evaluate a wider array of factors thought to contribute to performance. Because this situation creates a need in the area of test construction that should be addressed, the author sought to develop a multifactorial assessment and development instrument that could concomitantly evaluate a combination of physical, cognitive, intra-personal, inter-personal, and motivational factors that significantly contribute to performance. The specific aim of this article is to show why, how and whether this could be done, as well as to present and discuss the potential importance of the results obtained to date. The findings presented here will hopefully add to what is known about human performance and thus contribute to the professional literature, in addition to contributing to the continued development of the MMP™. The impetus for developing the MMP™ is first explained below, followed by a detailed description of the process involved and the findings obtained; their potential application is then discussed, as well as the possible limitations of the present research and the need for future studies to address them. PMID:29515479

  11. Constructing and Validating High-Performance MIEC-SVM Models in Virtual Screening for Kinases: A Better Way for Actives Discovery.

    PubMed

    Sun, Huiyong; Pan, Peichen; Tian, Sheng; Xu, Lei; Kong, Xiaotian; Li, Youyong; Dan Li; Hou, Tingjun

    2016-04-22

    The MIEC-SVM approach, which combines molecular interaction energy components (MIEC) derived from free energy decomposition with support vector machine (SVM) classification, has been found effective in capturing the energetic patterns of protein-peptide recognition. However, the performance of this approach in identifying small-molecule inhibitors of drug targets has not been well assessed and validated by experiments. Therefore, by combining different model construction protocols, the issues related to developing the best MIEC-SVM models were first explored for three kinase targets (ABL, ALK, and BRAF). For the investigated targets, the optimized MIEC-SVM models performed much better than the models based on the default SVM parameters and than Autodock for the tested datasets. The proposed strategy was then utilized to screen the Specs database for potential inhibitors of the ALK kinase. The experimental results showed that the optimized MIEC-SVM model identified 7 actives with IC50 < 10 μM among 50 purchased compounds (a hit rate of 14%, with 4 at the nM level), performing much better than Autodock (3 actives with IC50 < 10 μM among 50 purchased compounds, a hit rate of 6%, with 2 at the nM level), suggesting that the proposed strategy is a powerful tool for structure-based virtual screening.
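
    A minimal, hedged sketch of the general idea (not the authors' pipeline): an SVM trained on per-compound vectors of molecular interaction energy components to separate actives from inactives, with hyperparameters tuned by cross-validation rather than left at defaults. The feature layout, dataset and parameter grid are assumptions made purely for illustration.

    ```python
    # Illustrative MIEC-SVM-style classifier; all data below are random placeholders.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # One row per docked compound; columns are interaction energy terms
    # (e.g. van der Waals, electrostatic, solvation) per binding-site residue.
    X = rng.normal(size=(300, 4 * 40))
    y = rng.integers(0, 2, size=300)          # 1 = active, 0 = inactive (toy labels)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    grid = GridSearchCV(
        model,
        param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 1e-3, 1e-4]},
        scoring="roc_auc",
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    )
    grid.fit(X, y)
    print("best parameters:", grid.best_params_)
    print("cross-validated AUC:", round(grid.best_score_, 3))
    ```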

  12. Validity of the Optometry Admission Test in Predicting Performance in Schools and Colleges of Optometry.

    ERIC Educational Resources Information Center

    Kramer, Gene A.; Johnston, JoElle

    1997-01-01

    A study examined the relationship between Optometry Admission Test scores and pre-optometry or undergraduate grade point average (GPA) with first and second year performance in optometry schools. The test's predictive validity was limited but significant, and comparable to those reported for other admission tests. In addition, the scores…

  13. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved

  14. Comparison of airborne passive and active L-band System (PALS) brightness temperature measurements to SMOS observations during the SMAP validation experiment 2012 (SMAPVEX12)

    USDA-ARS?s Scientific Manuscript database

    The purpose of SMAP (Soil Moisture Active Passive) Validation Experiment 2012 (SMAPVEX12) campaign was to collect data for the pre-launch development and validation of SMAP soil moisture algorithms. SMAP is a National Aeronautics and Space Administration’s (NASA) satellite mission designed for the m...

  15. Cross-Modal Interactions in the Experience of Musical Performances: Physiological Correlates

    ERIC Educational Resources Information Center

    Chapados, Catherine; Levitin, Daniel J.

    2008-01-01

    This experiment was conducted to investigate cross-modal interactions in the emotional experience of music listeners. Previous research showed that visual information present in a musical performance is rich in expressive content, and moderates the subjective emotional experience of a participant listening and/or observing musical stimuli [Vines,…

  16. U.S. perspective on technology demonstration experiments for adaptive structures

    NASA Technical Reports Server (NTRS)

    Aswani, Mohan; Wada, Ben K.; Garba, John A.

    1991-01-01

    Evaluation of design concepts for adaptive structures is being performed in support of several focused research programs. These include programs such as Precision Segmented Reflector (PSR), Control Structure Interaction (CSI), and the Advanced Space Structures Technology Research Experiment (ASTREX). Although not specifically designed for adaptive structure technology validation, relevant experiments can be performed using the Passive and Active Control of Space Structures (PACOSS) testbed, the Space Integrated Controls Experiment (SPICE), the CSI Evolutionary Model (CEM), and the Dynamic Scale Model Test (DSMT) Hybrid Scale. In addition to the ground test experiments, several space flight experiments have been planned, including a reduced gravity experiment aboard the KC-135 aircraft, shuttle middeck experiments, and the Inexpensive Flight Experiment (INFLEX).

  17. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and of increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation

  18. Calibration and validation of toxicokinetic-toxicodynamic models for three neonicotinoids and some aquatic macroinvertebrates.

    PubMed

    Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J

    2018-05-01

    Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, the models must be calibrated and validated for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other on the effects of multiple compounds (the three neonicotinoids imidacloprid, thiacloprid and thiamethoxam) on the survival of a single species, the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of five tested species in the multiple species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival should ideally be performed using calibration data from both acute and chronic tests.
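
    For orientation, here is a simplified sketch of a reduced GUTS stochastic-death (GUTS-RED-SD) survival model evaluated under a pulsed, time-variable exposure profile. The parameter values and the exposure profile are invented for illustration and are not the calibrated values from the study.

    ```python
    # Toy GUTS-RED-SD model: scaled damage dD/dt = kd*(Cw(t) - D),
    # hazard h = bw*max(D - zw, 0) + hb, survival S(t) = exp(-integral of h).
    import numpy as np

    def guts_red_sd(times, conc, kd, bw, zw, hb):
        D = np.zeros_like(times)      # scaled damage
        H = np.zeros_like(times)      # cumulative hazard
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            D[i] = D[i - 1] + kd * (conc[i - 1] - D[i - 1]) * dt   # explicit Euler step
            H[i] = H[i - 1] + (bw * max(D[i] - zw, 0.0) + hb) * dt
        return np.exp(-H)             # predicted survival probability over time

    t = np.linspace(0, 28, 2801)      # days
    # two 2-day pulses of 10 (arbitrary concentration units)
    pulse = np.where(((t > 2) & (t < 4)) | ((t > 14) & (t < 16)), 10.0, 0.0)
    S = guts_red_sd(t, pulse, kd=0.3, bw=0.05, zw=2.0, hb=0.005)
    print(f"predicted survival after 28 d: {S[-1]:.2f}")
    ```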

  19. The Relationship of a Pilot's Educational Background, Aeronautical Experience and Recency of Experience to Performance In Initial Training at a Regional Airline

    NASA Astrophysics Data System (ADS)

    Shane, Nancy R.

    The purpose of this study was to determine how a pilot's educational background, aeronautical experience and recency of experience relate to their performance during initial training at a regional airline. Results show that variables in pilots' educational background, aeronautical experience and recency of experience do predict performance in training. The most significant predictors include years since graduation from college, multi-engine time, total time and whether or not a pilot had military flying experience. Due to the pilot shortage, the pilots entering regional airline training classes since August 2013 have varied backgrounds, aeronautical experience and recency of experience. As explained by Edward Thorndike's law of exercise and the law of recency, pilots who are actively using their aeronautical knowledge and exercising their flying skills should exhibit strong performance in those areas and pilots who have not been actively using their aeronautical knowledge and exercising their flying skills should exhibit degraded performance in those areas. Through correlation, chi-square and multiple regression analysis, this study tests this theory as it relates to performance in initial training at a regional airline.

  20. Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometric Instruments

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph

    2013-01-01

    Over the past decade, Earth-observing missions that employ spaceborne combined radar & radiometric instruments have been developed and implemented. These instruments include the use of large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform, which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed. The first is to use modeling tools to characterize the system and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods, but each approach had its shortcomings, as will be detailed in this paper.

  1. A Supersonic Argon/Air Coaxial Jet Experiment for Computational Fluid Dynamics Code Validation

    NASA Technical Reports Server (NTRS)

    Clifton, Chandler W.; Cutler, Andrew D.

    2007-01-01

    A non-reacting experiment is described in which data has been acquired for the validation of CFD codes used to design high-speed air-breathing engines. A coaxial jet-nozzle has been designed to produce pressure-matched exit flows of Mach 1.8 at 1 atm in both a center jet of argon and a coflow jet of air, creating a supersonic, incompressible mixing layer. The flowfield was surveyed using total temperature, gas composition, and Pitot probes. The data set was compared to CFD code predictions made using Vulcan, a structured grid Navier-Stokes code, as well as to data from a previous experiment in which a He-O2 mixture was used instead of argon in the center jet of the same coaxial jet assembly. Comparison of experimental data from the argon flowfield and its computational prediction shows that the CFD produces an accurate solution for most of the measured flowfield. However, the CFD prediction deviates from the experimental data in the region downstream of x/D = 4, underpredicting the mixing-layer growth rate.

  2. Idealized gas turbine combustor for performance research and validation of large eddy simulations.

    PubMed

    Williams, Timothy C; Schefer, Robert W; Oefelein, Joseph C; Shaddix, Christopher R

    2007-03-01

    This paper details the design of a premixed, swirl-stabilized combustor that was designed and built for the express purpose of obtaining validation-quality data for the development of large eddy simulations (LES) of gas turbine combustors. The combustor features nonambiguous boundary conditions, a geometrically simple design that retains the essential fluid dynamics and thermochemical processes that occur in actual gas turbine combustors, and unrestrictive access for laser and optical diagnostic measurements. After discussing the design detail, a preliminary investigation of the performance and operating envelope of the combustor is presented. With the combustor operating on premixed methane/air, both the equivalence ratio and the inlet velocity were systematically varied and the flame structure was recorded via digital photography. Interesting lean flame blowout and resonance characteristics were observed. In addition, the combustor exhibited a large region of stable, acoustically clean combustion that is suitable for preliminary validation of LES models.

  3. The Copernicus S5P Mission Performance Centre / Validation Data Analysis Facility for TROPOMI operational atmospheric data products

    NASA Astrophysics Data System (ADS)

    Compernolle, Steven; Lambert, Jean-Christopher; Langerock, Bavo; Granville, José; Hubert, Daan; Keppens, Arno; Rasson, Olivier; De Mazière, Martine; Fjæraa, Ann Mari; Niemeijer, Sander

    2017-04-01

    Sentinel-5 Precursor (S5P), to be launched in 2017 as the first atmospheric composition satellite of the Copernicus programme, carries as payload the TROPOspheric Monitoring Instrument (TROPOMI) developed by The Netherlands in close cooperation with ESA. Designed to measure Earth radiance and solar irradiance in the ultraviolet, visible and near infrared, TROPOMI will provide Copernicus with observational data on atmospheric composition at unprecedented geographical resolution. The S5P Mission Performance Center (MPC) provides an operational service-based solution for various QA/QC tasks, including the validation of S5P Level-2 data products and the support to algorithm evolution. Those two tasks are to be accomplished by the MPC Validation Data Analysis Facility (VDAF), one MPC component developed and operated at BIRA-IASB with support from S&T and NILU. The routine validation to be ensured by VDAF is complemented by a list of validation AO projects carried out by ESA's S5P Validation Team (S5PVT), with whom interaction is essential. Here we will introduce the general architecture of VDAF, its relation to the other MPC components, the generic and specific validation strategies applied for each of the official TROPOMI data products, and the expected output of the system. The S5P data products to be validated by VDAF are diverse: O3 (vertical profile, total column, tropospheric column), NO2 (total and tropospheric column), HCHO (tropospheric column), SO2 (column), CO (column), CH4 (column), aerosol layer height and clouds (fractional cover, cloud-top pressure and optical thickness). Starting from a generic validation protocol meeting community-agreed standards, a set of specific validation settings is associated with each data product, as well as the appropriate set of Fiducial Reference Measurements (FRM) to which it will be compared. VDAF collects FRMs from ESA's Validation Data Centre (EVDC) and from other sources (e.g., WMO's GAW, NDACC and TCCON). Data

  4. DSMC Simulations of Hypersonic Flows and Comparison With Experiments

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.

    2004-01-01

    This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validation activities for a relatively new DSMC code, DS2V, developed by Bird (the second author). Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and identify the range of measurement variability of the proposed validation data when benchmarked with respect to the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.

  5. Measuring Black Men’s Police-Based Discrimination Experiences: Development and Validation of the Police and Law Enforcement (PLE) Scale

    PubMed Central

    English, Devin; Bowleg, Lisa; del Río-González, Ana Maria; Tschann, Jeanne M.; Agans, Robert; Malebranche, David J

    2017-01-01

    Objectives Although social science research has examined police and law enforcement-perpetrated discrimination against Black men using policing statistics and implicit bias studies, there is little quantitative evidence detailing this phenomenon from the perspective of Black men. Consequently, there is a dearth of research detailing how Black men’s perspectives on police and law enforcement-related stress predict negative physiological and psychological health outcomes. This study addresses these gaps with the qualitative development and quantitative test of the Police and Law Enforcement (PLE) scale. Methods In Study 1, we employed thematic analysis on transcripts of individual qualitative interviews with 90 Black men to assess key themes and concepts and develop quantitative items. In Study 2, we used 2 focus groups comprised of 5 Black men each (n=10), intensive cognitive interviewing with a separate sample of Black men (n=15), and piloting with another sample of Black men (n=13) to assess the ecological validity of the quantitative items. For Study 3, we analyzed data from a sample of 633 Black men between the ages of 18 and 65 to test the factor structure of the PLE, as well as its concurrent validity and convergent/discriminant validity. Results Qualitative analyses and confirmatory factor analyses suggested that a 5-item, 1-factor measure appropriately represented respondents’ experiences of police/law enforcement discrimination. As hypothesized, the PLE was positively associated with measures of racial discrimination and depressive symptoms. Conclusions Preliminary evidence suggests that the PLE is a reliable and valid measure of Black men’s experiences of discrimination with police/law enforcement. PMID:28080104

  6. Statistical analysis of microgravity experiment performance using the degrees of success scale

    NASA Technical Reports Server (NTRS)

    Upshaw, Bernadette; Liou, Ying-Hsin Andrew; Morilak, Daniel P.

    1994-01-01

    This paper describes an approach to identify factors that significantly influence microgravity experiment performance. Investigators developed the 'degrees of success' scale to provide a numerical representation of success. A degree of success was assigned to 293 microgravity experiments. Experiment information including the degree of success rankings and factors for analysis was compiled into a database. Through an analysis of variance, nine significant factors in microgravity experiment performance were identified. The frequencies of these factors are presented along with the average degree of success at each level. A preliminary discussion of the relationship between the significant factors and the degree of success is presented.
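
    As a hedged illustration of the analysis-of-variance step described above, the snippet below tests whether mean degree-of-success scores differ across the levels of a single candidate factor; the factor, its levels and the scores are fabricated placeholders, not the study database.

    ```python
    # One-way ANOVA on (simulated) degree-of-success scores grouped by factor level.
    from scipy import stats

    scores_by_level = {
        "level_A": [9, 8, 9, 7, 10, 8],
        "level_B": [7, 6, 8, 7, 6, 7],
        "level_C": [5, 6, 4, 7, 5, 6],
    }

    f_stat, p_value = stats.f_oneway(*scores_by_level.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    ```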

  7. Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.

    2010-01-01

    Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.
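
    A minimal sketch, assuming a single lumped rotational inertia, of the kind of PID speed-control loop described above; the gains, inertia, damping and load values are illustrative only and are not taken from the NASA model.

    ```python
    # Toy PID speed controller driving a rigid motor/gearbox inertia against a load torque.
    J, b = 0.02, 0.005              # inertia [kg*m^2], viscous damping [N*m*s/rad]
    kp, ki, kd = 2.0, 8.0, 0.01     # PID gains (illustrative)
    dt, t_end = 1e-3, 2.0
    target = 150.0                  # commanded speed [rad/s]
    load_torque = 0.3               # constant load [N*m]

    omega, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = target - omega
        integral += err * dt
        derivative = (err - prev_err) / dt
        torque = kp * err + ki * integral + kd * derivative    # PID control torque
        omega += (torque - load_torque - b * omega) / J * dt   # rigid-body dynamics
        prev_err = err

    print(f"speed after {t_end} s: {omega:.1f} rad/s (target {target} rad/s)")
    ```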

  8. Continued Development and Validation of Methods for Spheromak Simulation

    NASA Astrophysics Data System (ADS)

    Benedett, Thomas

    2015-11-01

    The HIT-SI experiment has demonstrated stable sustainment of spheromaks; determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and provide an intermediate step between theory and future experiments. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (~ 36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. Results from verification of the PSI-TET extended MHD model using the GEM magnetic reconnection challenge will also be presented along with investigation of injector configurations for future SIHI experiments using Taylor state equilibrium calculations. Work supported by DoE.

  9. Validation and Continued Development of Methods for Spheromak Simulation

    NASA Astrophysics Data System (ADS)

    Benedett, Thomas

    2016-10-01

    The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. An implementation of anisotropic viscosity, a feature observed to improve agreement between NIMROD simulations and experiment, will also be presented, along with investigations of flux conserver features and their impact on density control for future SIHI experiments. Work supported by DoE.

  10. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    NASA Astrophysics Data System (ADS)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-06-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat, designed to minimize the heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers, are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurements of the total helium vaporization rate in String 2 give, after subtraction of supplementary heat loads and end effects, an estimate of the total thermal load to a standard LHC cell (107 m) including two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping which allows validation of the calculated and estimated thermal performance of the cryostat components, including efficiency of the heat interceptions.
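
    As a back-of-the-envelope illustration (not CERN's actual analysis) of how a measured helium vaporization rate maps to an equivalent heat load, the boil-off mass flow can be multiplied by the latent heat of vaporization of helium; the boil-off figure below is an assumed placeholder.

    ```python
    # Convert an assumed helium boil-off rate into an equivalent steady-state heat load.
    latent_heat = 20.7e3        # J/kg, latent heat of liquid helium near 4.2 K
    boiloff_rate = 1.2 / 3600   # kg/s, assumed measured vaporization rate of 1.2 kg/h

    heat_load = boiloff_rate * latent_heat
    print(f"equivalent heat load: {heat_load:.1f} W")
    ```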

  11. The Contribution of Rubrics to the Validity of Performance Assessment: A Study of the Conservation-Restoration and Design Undergraduate Degrees

    ERIC Educational Resources Information Center

    Menéndez-Varela, José-Luis; Gregori-Giralt, Eva

    2016-01-01

    Rubrics have attained considerable importance in the authentic and sustainable assessment paradigm; nevertheless, few studies have examined their contribution to validity, especially outside the domain of educational studies. This empirical study used a quantitative approach to analyse the validity of a rubrics-based performance assessment. Raters…

  12. Performance anxiety experiences of professional ballet dancers: the importance of control.

    PubMed

    Walker, Imogen J; Nordin-Bates, Sanna M

    2010-01-01

    Performance anxiety research abounds in sport psychology, yet has been relatively sparse in dance. The present study explores ballet dancers' experiences of performance anxiety in relation to: 1. symptom type, intensity, and directional interpretation; 2. experience level (including company rank); and 3. self-confidence and psychological skills. Fifteen elite ballet dancers representing all ranks in one company were interviewed, and qualitative content analysis was conducted. Results revealed that cognitive anxiety was more dominant than somatic anxiety, and was unanimously interpreted as debilitative to performance. Somatic anxiety was more likely to be interpreted as facilitative, with the majority of dancers recognizing that a certain amount of anxiety could be beneficial to performance. Principal dancers suffered from higher intensities of performance anxiety than corps de ballet members. Feeling out of control emerged as a major theme in both the experience of anxiety and its interpretation. As a result, prevention or handling of anxiety symptoms may be accomplished by helping dancers to feel in control. Dancers may benefit from education about anxiety symptoms and their interpretation, in addition to psychological skills training incorporating cognitive restructuring strategies and problem-focussed coping to help increase their feelings of being in control.

  13. Achievement-Relevant Personality: Relations with the Big Five and Validation of an Efficient Instrument

    PubMed Central

    Briley, Daniel A.; Domiteaux, Matthew; Tucker-Drob, Elliot M.

    2014-01-01

    Many achievement-relevant personality measures (APMs) have been developed, but the interrelations among APMs or associations with the broader personality landscape are not well-known. In Study 1, 214 participants were measured on 36 APMs and a measure of the Big Five. Factor analytic results supported the convergent and discriminant validity of five latent dimensions: performance, mastery, self-doubt, effort, and intellectual investment. Conscientiousness, neuroticism, and openness to experience had the most consistent associations with APMs. We constructed a more efficient scale, the Multidimensional Achievement-Relevant Personality Scale (MAPS). In Study 2, we replicated the factor structure and external correlates of the MAPS in a sample of 359 individuals. Finally, we validated the MAPS with four indicators of academic performance and demonstrated incremental validity. PMID:24839374

  14. Using Performance Tools to Support Experiments in HPC Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian

    2014-01-01

    The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in the programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when the errors (failures) occurred. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.

  15. Predicting Job Performance for the Visually Impaired: Validity of the Fine Finger Dexterity Work Task.

    ERIC Educational Resources Information Center

    Giesen, J. Martin; And Others

    The study was designed to determine the reliability and criterion validity of a psychomotor performance test (the Fine Finger Dexterity Work Task Unit) with 40 partially or totally blind adults. Reliability was established by using the test-retest method. A supervisory rating was developed and the reliability established by using the split-half…

  16. Beyond Performance: A Motivational Experiences Model of Stereotype Threat

    PubMed Central

    Thoman, Dustin B.; Smith, Jessi L.; Brown, Elizabeth R.; Chase, Justin; Lee, Joo Young K.

    2013-01-01

    The contributing role of stereotype threat (ST) to learning and performance decrements for stigmatized students in highly evaluative situations has been vastly documented and is now widely known by educators and policy makers. However, recent research illustrates that underrepresented and stigmatized students’ academic and career motivations are influenced by ST more broadly, particularly through influences on achievement orientations, sense of belonging, and intrinsic motivation. Such a focus moves conceptualizations of ST effects in education beyond the influence on a student’s performance, skill level, and feelings of self-efficacy per se to experiencing greater belonging uncertainty and lower interest in stereotyped tasks and domains. These negative experiences are associated with important outcomes such as decreased persistence and domain identification, even among students who are high in achievement motivation. In this vein, we present and review support for the Motivational Experience Model of ST, a self-regulatory model framework for integrating research on ST, achievement goals, sense of belonging, and intrinsic motivation to make predictions for how stigmatized students’ motivational experiences are maintained or disrupted, particularly over long periods of time. PMID:23894223

  17. Validation and Continued Development of Methods for Spheromak Simulation

    NASA Astrophysics Data System (ADS)

    Benedett, Thomas

    2017-10-01

    The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. An extended MHD model has shown good agreement with experimental data at 14 kHz injector operation. Efforts to extend the existing validation to a range of higher frequencies (36, 53, 68 kHz) using the PSI-Tet 3D extended MHD code will be presented, along with simulations of potential combinations of flux conserver features and helicity injector configurations and their impact on current drive performance, density control, and temperature for future SIHI experiments. Work supported by USDoE.

  18. Performance Assessment in the PILOT Experiment On Board Space Stations Mir and ISS.

    PubMed

    Johannes, Bernd; Salnitski, Vyacheslav; Dudukin, Alexander; Shevchenko, Lev; Bronnikov, Sergey

    2016-06-01

    The aim of this investigation into the performance and reliability of Russian cosmonauts in hand-controlled docking of a spacecraft on a space station (experiment PILOT) was to enhance overall mission safety and crew training efficiency. The preliminary findings on the Mir space station suggested that a break in docking training of about 90 d significantly degraded performance. Intensified experiment schedules on the International Space Station (ISS) have allowed for a monthly experiment using an on-board simulator. Therefore, instead of just three training tasks as on Mir, five training flights per session have been implemented on the ISS. This experiment was run in parallel but independently of the operational docking training the cosmonauts receive. First, performance was compared between the experiments on the two space stations by nonparametric testing. Performance differed significantly between space stations preflight, in flight, and postflight. Second, performance was analyzed by modeling the linear mixed effects of all variances (LME). The fixed factors space station, mission phases, training task numbers, and their interaction were analyzed. Cosmonauts were designated as a random factor. All fixed factors were found to be significant and the interaction between stations and mission phase was also significant. In summary, performance on the ISS was shown to be significantly improved, thus enhancing mission safety. Additional approaches to docking performance assessment and prognosis are presented and discussed.
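
    A hedged sketch of a linear mixed-effects analysis in the spirit described above: fixed effects for station, mission phase and training-task number (with a station-by-phase interaction) and cosmonaut as a random intercept. The data frame is simulated for illustration only and does not reproduce the experiment's variables exactly.

    ```python
    # Toy mixed-effects model: performance ~ station * phase + task number,
    # with a random intercept per cosmonaut.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 300
    df = pd.DataFrame({
        "performance": rng.normal(70, 10, n),
        "station": rng.choice(["Mir", "ISS"], n),
        "phase": rng.choice(["preflight", "inflight", "postflight"], n),
        "task_no": rng.integers(1, 6, n),
        "cosmonaut": rng.choice([f"c{i}" for i in range(20)], n),
    })

    model = smf.mixedlm("performance ~ station * phase + task_no", df, groups=df["cosmonaut"])
    result = model.fit()
    print(result.summary())
    ```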

  19. Results From Phase-1 and Phase-2 GOLD Experiments

    NASA Technical Reports Server (NTRS)

    Wilson, K.; Jeganathan, M.; Lesh, J. R.; James, J.; Xu, G.

    1997-01-01

    The Ground/Orbiter Lasercomm Demonstration conducted between the Japanese Engineering Test Satellite (ETS-VI) and the ground station at JPL's Table Mountain Facility, Wrightwood, California, was the first ground-to-space two-way optical communications experiment. Experiment objectives included validating the performance predictions of the optical link. Atmospheric attenuation and seeing measurements were made during the experiment, and data were analyzed. Downlink telemetry data recovered over the course of the experiment provided information on in-orbit performance of the ETS-VI's laser communications equipment. Bit-error rates as low as 10^-4 were measured on the uplink and 10^-5 on the downlink. Measured signal powers agreed well with theoretical predictions.

  20. Development and validation of a high-fidelity phonomicrosurgical trainer.

    PubMed

    Klein, Adam M; Gross, Jennifer

    2017-04-01

    To validate the use of a high-fidelity phonomicrosurgical trainer. A high-fidelity phonomicrosurgical trainer, based on a previously validated model by Contag et al., was designed with multilayered vocal folds that more closely mimic the consistency of true vocal folds, containing intracordal lesions to practice phonomicrosurgical removal. A training module was developed to simulate the true phonomicrosurgical experience. A validation study with novice and expert surgeons was conducted. Novices and experts were instructed to remove the lesion from the synthetic vocal folds, and novices were given four training trials. Performances were measured by the amount of time spent and tissue injury (microflap, superficial, deep) to the vocal fold. An independent Student t test and Fisher exact tests were used to compare subjects. A matched-paired t test and Wilcoxon signed rank tests were used to compare novice performance on the first and fourth trials and assess for improvement. Experts completed the excision with fewer total errors than novices (P = .004) and caused less injury to the microflap (P = .05) and superficial tissue (P = .003). Novices improved their performance with training, making fewer total errors (P = .002) and fewer superficial tissue injuries (P = .02) and spending less time on removal (P = .002) after several practice trials. This high-fidelity phonomicrosurgical trainer has been validated for novice surgeons. It can distinguish between experts and novices, and after training it helped to improve novice performance. Laryngoscope, 127:888-893, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  1. NASA's Rodent Research Project: Validation of Capabilities for Conducting Long Duration Experiments in Space

    NASA Technical Reports Server (NTRS)

    Choi, Sungshin Y.; Cole, Nicolas; Reyes, America; Lai, San-Huei; Klotz, Rebecca; Beegle, Janet E.; Wigley, Cecilia L.; Pletcher, David; Globus, Ruth K.

    2015-01-01

    Research using rodents is an essential tool for advancing biomedical research on Earth and in space. Prior rodent experiments on the Shuttle were limited by the short flight duration. The International Space Station (ISS) provides a new platform for conducting rodent experiments under long-duration conditions. Rodent Research (RR)-1 was conducted to validate flight hardware, operations, and science capabilities that were developed at the NASA Ames Research Center. Twenty C57BL/6J adult female mice were launched on Sept 21, 2014 in a Dragon Capsule (SpaceX-4), then transferred to the ISS for a total time of 21-22 days (10 commercial mice) or 37 days (10 validation mice). Tissues collected on-orbit were either rapidly frozen or preserved in RNAlater at -80°C (n = 2/group) until their return to Earth. Remaining carcasses on-orbit were rapidly frozen for dissection post-flight. The three control groups at Kennedy Space Center consisted of: Basal mice euthanized at the time of launch, Vivarium controls housed in standard cages, and Ground Controls (GC) housed in flight hardware within an environmental chamber. Upon return to Earth, there were no differences in body weights between Flight (FLT) and GC at the end of the 37 days in space. Liver enzyme activity levels of FLT mice and all control mice were similar in magnitude to those of the samples that were processed under optimal conditions in the laboratory. Liver samples dissected on-orbit yielded high-quality RNA (RIN 8.99 ± 0.59, n = 7). Liver samples dissected post-flight from the intact, frozen FLT carcasses yielded a RIN of 7.27 ± 0.52 (n = 6). Additionally, wet weights of various tissues were measured. Adrenal glands and spleen showed no significant differences in FLT compared to GC, although thymus and liver weights were significantly greater in FLT compared to GC. Over 3,000 tissue aliquots collected post-flight from the four groups of mice were deposited into the Ames Life Science Data Archives for future Biospecimen

  2. Development and content validation of performance assessments for endoscopic third ventriculostomy.

    PubMed

    Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M

    2015-08-01

    This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated, on a 5-point Likert scale, their agreement with including each procedural step, error, and GRS item in the respective instruments. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklists contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now

  3. Performance and Symptom Validity Testing as a Function of Medical Board Evaluation in U.S. Military Service Members with a History of Mild Traumatic Brain Injury.

    PubMed

    Armistead-Jehle, Patrick; Cole, Wesley R; Stegman, Robert L

    2018-02-01

    The study was designed to replicate and extend previous findings demonstrating the high rates of invalid neuropsychological testing in military service members (SMs) with a history of mild traumatic brain injury (mTBI) assessed in the context of a medical evaluation board (MEB). Two hundred thirty-one active duty SMs (61 of whom were undergoing an MEB) underwent neuropsychological assessment. Performance validity (Word Memory Test) and symptom validity (MMPI-2-RF) test data were compared across those evaluated within disability (MEB) and clinical contexts. As with previous studies, there were significantly more individuals in an MEB context who failed performance (MEB = 57%, non-MEB = 31%) and symptom validity testing (MEB = 57%, non-MEB = 22%), and performance validity testing had a notable effect on cognitive test scores. Performance and symptom validity test failure rates did not vary as a function of the reason for disability evaluation when divided into behavioral versus physical health conditions. These data are consistent with past studies, and extend those studies by including symptom validity testing and investigating the effect of reason for MEB. This and previous studies demonstrate that more than 50% of SMs seen in the context of an MEB will fail performance validity tests and over-report on symptom validity measures. These results emphasize the importance of using both performance and symptom validity testing when evaluating SMs with a history of mTBI, especially if they are being seen for disability evaluations, in order to ensure the accuracy of cognitive and psychological test data. Published by Oxford University Press 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
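
    For illustration only, the counts implied by the reported sample sizes and failure percentages (rounded) can be compared with a chi-square test of independence; this is a reconstruction for demonstration, not the authors' analysis.

    ```python
    # Chi-square test on approximate performance-validity failure counts (MEB vs non-MEB).
    import numpy as np
    from scipy.stats import chi2_contingency

    #                  fail  pass
    table = np.array([[35,   26],     # MEB context  (~57% of 61 examinees)
                      [53,  117]])    # non-MEB      (~31% of 170 examinees)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
    ```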

  4. Construct validity of the LapVR virtual-reality surgical simulator.

    PubMed

    Iwata, Naoki; Fujiwara, Michitaka; Kodera, Yasuhiro; Tanaka, Chie; Ohashi, Norifumi; Nakayama, Goro; Koike, Masahiko; Nakao, Akimasa

    2011-02-01

    Laparoscopic surgery requires fundamental skills peculiar to endoscopic procedures such as eye-hand coordination. Acquisition of such skills prior to performing actual surgery is highly desirable for favorable outcome. Virtual-reality simulators have been developed for both surgical training and assessment of performance. The aim of the current study is to show construct validity of a novel simulator, LapVR (Immersion Medical, San Jose, CA, USA), for Japanese surgeons and surgical residents. Forty-four subjects were divided into the following three groups according to their experience in laparoscopic surgery: 14 residents (RE) with no experience in laparoscopic surgery, 14 junior surgeons (JR) with little experience, and 16 experienced surgeons (EX). All subjects executed "essential task 1" programmed in the LapVR, which consists of six tasks, resulting in automatic measurement of 100 parameters indicating various aspects of laparoscopic skills. Time required for each task tended to be inversely correlated with experience in laparoscopic surgery. For the peg transfer skill, statistically significant differences were observed between EX and RE in three parameters, including total time and average time taken to complete the procedure and path length for the nondominant hand. For the cutting skill, similar differences were observed between EX and RE in total time, number of unsuccessful cutting attempts, and path length for the nondominant hand. According to the programmed comprehensive evaluation, performance in terms of successful completion of the task and actual experience of the participants in laparoscopic surgery correlated significantly for the peg transfer (P=0.007) and cutting skills (P=0.026). The peg transfer and cutting skills could best distinguish between EX and RE. This study is the first to provide evidence that LapVR has construct validity to discriminate between novice and experienced laparoscopic surgeons.

  5. Examining the validity of the Homework Performance Questionnaire: Multi-informant assessment in elementary and middle school.

    PubMed

    Power, Thomas J; Watkins, Marley W; Mautone, Jennifer A; Walcott, Christy M; Coutts, Michael J; Sheridan, Susan M

    2015-06-01

    Methods for measuring homework performance have been limited primarily to parent reports of homework deficits. The Homework Performance Questionnaire (HPQ) was developed to assess the homework functioning of students in Grades 1 to 8 from the perspective of both teachers and parents. The purpose of this study was to examine the factorial validity of teacher and parent versions of this scale, and to evaluate gender and grade-level differences in factor scores. The HPQ was administered in 4 states from varying regions of the United States. The validation sample consisted of students (n = 511) for whom both parent and teacher ratings were obtained (52% female, mean of 9.5 years of age, 79% non-Hispanic, and 78% White). The cross-validation sample included 1,450 parent ratings and 166 teacher ratings with similar demographic characteristics. The results of confirmatory factor analyses demonstrated that the best-fitting model for teachers was a bifactor solution including a general factor and 2 orthogonal factors, referring to student self-regulation and competence. The best-fitting model for parents was also a bifactor solution, including a general factor and 3 orthogonal factors, referring to student self-regulation, student competence, and teacher support of homework. Gender differences were identified for the general and self-regulation factors of both versions. Overall, the findings provide strong support for the HPQ as a multi-informant, multidimensional measure of homework performance that has utility for the assessment of elementary and middle school students. (c) 2015 APA, all rights reserved.

  6. Deconstructing Global Markets through Critical Performative Experiences in Puerto Rico

    ERIC Educational Resources Information Center

    Medina, Carmen Liliana; Weltsek, Gustave J.

    2013-01-01

    Critical Performative Pedagogies, the idea that "The nature of drama as a once removed creative experience turns non-critical implicit classroom identity formation into explicit identity performance as it asks participants to actively reflect upon how identity is created and engaged within fictional social interactions." (Weltsek and…

  7. Parents' self-efficacy, outcome expectations, and self-reported task performance when managing atopic dermatitis in children: instrument reliability and validity.

    PubMed

    Mitchell, Amy E; Fraser, Jennifer A

    2011-02-01

    Support and education for parents faced with managing a child with atopic dermatitis is crucial to the success of current treatments. Interventions aiming to improve parent management of this condition are promising. Unfortunately, evaluation is hampered by lack of precise research tools to measure change. To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks; outcome expectations of performing management tasks; and self-reported task performance in a community sample of parents of children with atopic dermatitis. The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire, to measure self-efficacy, outcome expectations, and self-reported task performance related to managing child atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire. Participants were recruited through schools across Australia. Satisfactory internal consistency and test-retest reliability was demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; self-efficacy for managing atopic dermatitis and self-reported task performance; and self-efficacy for managing atopic dermatitis and outcome expectations. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks, and managing the
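
    As a small illustration of the internal-consistency checks mentioned above, Cronbach's alpha can be computed from its standard definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the item responses below are simulated, not PEMS or POEEMS data.

    ```python
    # Cronbach's alpha for a set of scale items (rows = respondents, columns = items).
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(7)
    trait = rng.normal(size=(120, 1))                          # shared latent trait
    responses = trait + rng.normal(scale=0.8, size=(120, 8))   # 8 correlated items
    print(f"alpha = {cronbach_alpha(responses):.2f}")
    ```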

  8. Assessing students' communication skills: validation of a global rating.

    PubMed

    Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose

    2008-12-01

    Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences, its importance should be emphasised through performance-based assessment. Because detailed checklists have been shown, for several reasons, to be poorly suited to the assessment of communication skills, this study aimed to validate a global rating scale. A Canadian instrument was translated to German and adapted to assess students' communication skills during an end-of-semester OSCE. Subjects were second- and third-year medical students in the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as standardised patients using this instrument.

  9. Validity, Reliability, and Equity Issues in an Observational Talent Assessment Process in the Performing Arts

    ERIC Educational Resources Information Center

    Oreck, Barry A.; Owen, Steven V.; Baum, Susan M.

    2003-01-01

    The lack of valid, research-based methods to identify potential artistic talent hampers the inclusion of the arts in programs for the gifted and talented. The Talent Assessment Process in Dance, Music, and Theater (D/M/T TAP) was designed to identify potential performing arts talent in diverse populations, including bilingual and special education…

  10. Development and Validation of an Internet Use Attitude Scale

    ERIC Educational Resources Information Center

    Zhang, Yixin

    2007-01-01

    This paper describes the development and validation of a new 40-item Internet Attitude Scale (IAS), a one-dimensional inventory for measuring the Internet attitudes. The first experiment initiated a generic Internet attitude questionnaire, ensured construct validity, and examined factorial validity and reliability. The second experiment further…

  11. Development and initial validation of the Parental PELICAN Questionnaire (PaPEQu)--an instrument to assess parental experiences and needs during their child's end-of-life care.

    PubMed

    Zimmermann, Karin; Cignacco, Eva; Eskola, Katri; Engberg, Sandra; Ramelet, Anne-Sylvie; Von der Weid, Nicolas; Bergstraesser, Eva

    2015-12-01

    To develop and test the Parental PELICAN Questionnaire, an instrument to retrospectively assess parental experiences and needs during their child's end-of-life care. To offer appropriate care for dying children, healthcare professionals need to understand the illness experience from the family perspective. A questionnaire specific to the end-of-life experiences and needs of parents losing a child is needed to evaluate the perceived quality of paediatric end-of-life care. This is an instrument development study applying mixed methods based on recommendations for questionnaire design and validation. The Parental PELICAN Questionnaire was developed in four phases between August 2012 and March 2014: phase 1: item generation; phase 2: validity testing; phase 3: translation; phase 4: pilot testing. Psychometric properties were assessed after applying the Parental PELICAN Questionnaire in a sample of 224 bereaved parents in April 2014. Validity testing covered evidence based on tests of content, internal structure and relations to other variables. The Parental PELICAN Questionnaire consists of approximately 90 items in four slightly different versions accounting for particularities of the four diagnostic groups. The questionnaire's items were structured according to six quality domains described in the literature. Evidence of initial validity and reliability could be demonstrated with the involvement of healthcare professionals and bereaved parents. The Parental PELICAN Questionnaire holds promise as a measure to assess parental experiences and needs and is applicable to a broad range of paediatric specialties and settings. Future validation is needed to evaluate its suitability in different cultures. © 2015 John Wiley & Sons Ltd.

  12. Validation of OpenFoam for heavy gas dispersion applications.

    PubMed

    Mack, A; Spruijt, M P N

    2013-11-15

    In the present paper, heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data was validated with experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity-driven environment (slope), where the heavy gas induced the turbulence. For the code-to-code comparison, a hypothetical heavy gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases, applying the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in strong damping of the turbulence and therefore reduced heavy gas mixing. This interaction, driven by buoyancy effects, was studied in particular to ensure that the turbulence-buoyancy coupling, and not the global behaviour of the turbulence modelling, is the main driver for the reduced mixing. For both test cases, comparisons were performed between the OpenFoam and Fluent solutions, which were mostly in good agreement with each other. Besides steady-state solutions, time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of both codes were in good agreement with each other and with the experimental data. The turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. Within the strongly turbulent environment, both codes showed excellent comparability. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Concurrent and convergent validity of the mobility- and multidimensional-hierarchical disability categorization models with physical performance in community older adults.

    PubMed

    Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi

    2014-01-01

    A valid, time-efficient and easy-to-use instrument is important for busy clinical settings, large-scale surveys, or community screening use. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and triangulating both models with physical performance measures in older adults. A total of 604 community-dwelling older adults at least 60 years of age volunteered to participate. Self-reported function in the mobility, instrumental activities of daily living (IADL) and activities of daily living (ADL) domains was recorded, and disability status was then determined based on both the multidimensional hierarchical categorization model and the mobility hierarchical categorization model. The physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. Both categorization models showed high correlation (γs = 0.92, p < 0.001) and agreement (kappa = 0.61, p < 0.0001). Physical performance measures demonstrated significantly different group means among the disability subgroups based on both categorization models. The results of multiple regression analysis indicated that both models individually explain a similar amount of variance in all physical performance measures, with adjustments for age, sex, and number of comorbidities. Our results found that the mobility hierarchical disability categorization model is a valid and time-efficient tool for large surveys or screening use.
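
    The two agreement statistics reported above can be reproduced in a few lines. The sketch below is illustrative only (scipy and scikit-learn, with made-up ordinal disability categories), not the study's analysis code.

    ```python
    # Spearman correlation and Cohen's kappa between two disability categorization models.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import cohen_kappa_score

    # hypothetical ordinal categories (0 = no disability ... 3 = severe) from each model
    multidimensional_model = np.array([0, 1, 1, 2, 3, 0, 2, 1, 3, 2])
    mobility_model         = np.array([0, 1, 2, 2, 3, 0, 2, 0, 3, 2])

    rho, p = spearmanr(multidimensional_model, mobility_model)
    kappa = cohen_kappa_score(multidimensional_model, mobility_model)
    print(f"Spearman rho = {rho:.2f} (p = {p:.4f}), kappa = {kappa:.2f}")
    ```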

  14. Comparing surgical experience with performance on a sinus surgery simulator.

    PubMed

    Diment, Laura E; Ruthenbeck, Greg S; Dharmawardana, Nuwan; Carney, A Simon; Woods, Charmaine M; Ooi, Eng H; Reynolds, Karen J

    2016-12-01

    This study evaluates whether surgical experience influences technical competence using the Flinders sinus surgery simulator, a virtual environment designed to teach nasal endoscopic surgical skills. Ten experienced sinus surgeons (five consultants and five registrars) and 14 novices (seven resident medical officers and seven interns/medical students) completed three simulation tasks using haptic controllers. Task 1 required navigation of the sinuses and identification of six anatomical landmarks, Task 2 required removal of unhealthy tissue while preserving healthy tissue, and Task 3 entailed backbiting within pre-set lines on the uncinate process and microdebriding tissue between the cuts. Novices were compared with experts on a range of measures, using Mann-Whitney U-tests. Novices took longer on all tasks (Task 1: 278%, P < 0.005; Task 2: 112%, P < 0.005; Task 3: 72%, P < 0.005). In Task 1, novices' instruments travelled further than experts' (379%, P < 0.005) and exerted greater maximum force (12%, P < 0.05). In Tasks 2 and 3, novices performed more cutting movements to remove the tissue (Task 2: 1500%, P < 0.005; Task 3: 72%, P < 0.005). Experts also completed more of Task 3 (66%, P < 0.05). The study demonstrated the Flinders sinus simulator's construct validity, differentiating between experts and novices with respect to procedure time, instrument distance travelled and number of cutting motions to complete the task. © 2015 Royal Australasian College of Surgeons.
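
    For readers unfamiliar with the statistics, the novice-versus-expert comparisons above are nonparametric two-sample tests; a minimal sketch with invented task times is shown below (not the study's data or code).

    ```python
    # Mann-Whitney U test comparing hypothetical novice and expert completion times.
    from scipy.stats import mannwhitneyu

    novice_time_s = [410, 395, 520, 610, 480, 455, 500]  # made-up Task 1 times (s)
    expert_time_s = [150, 140, 180, 165, 130]

    u, p = mannwhitneyu(novice_time_s, expert_time_s, alternative="greater")
    print(f"U = {u}, one-sided p = {p:.4f}")  # novices expected to take longer
    ```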

  15. Validity: Applying Current Concepts and Standards to Gynecologic Surgery Performance Assessments

    ERIC Educational Resources Information Center

    LeClaire, Edgar L.; Nihira, Mikio A.; Hardré, Patricia L.

    2015-01-01

    Validity is critical for meaningful assessment of surgical competency. According to the Standards for Educational and Psychological Testing, validation involves the integration of data from well-defined classifications of evidence. In the authoritative framework, data from all classifications support construct validity claims. The two aims of this…

  16. Endogenous protein "barcode" for data validation and normalization in quantitative MS analysis.

    PubMed

    Lee, Wooram; Lazar, Iulia M

    2014-07-01

    Quantitative proteomic experiments with mass spectrometry detection are typically conducted by using stable isotope labeling and label-free quantitation approaches. Proteins with housekeeping functions and stable expression levels, such as actin, tubulin, and glyceraldehyde-3-phosphate dehydrogenase, are frequently used as endogenous controls. Recent studies have shown that the expression level of such common housekeeping proteins is, in fact, dependent on various factors such as cell type, cell cycle, or disease status and can change in response to biochemical stimulation. The interference of such phenomena can, therefore, substantially compromise their use for data validation, alter the interpretation of results, and lead to erroneous conclusions. In this work, we advance the concept of a protein "barcode" for data normalization and validation in quantitative proteomic experiments. The barcode comprises a novel set of proteins that was generated from cell cycle experiments performed with MCF7, an estrogen receptor positive breast cancer cell line, and MCF10A, a nontumorigenic immortalized breast cell line. The protein set was selected from a list of ~3700 proteins identified in different cellular subfractions and cell cycle stages of MCF7/MCF10A cells, based on the stability of spectral count data generated with an LTQ ion trap mass spectrometer. A total of 11 proteins qualified as endogenous standards for the nuclear barcode and 62 for the cytoplasmic barcode. The validation of the protein sets was performed with a complementary SKBR3/Her2+ cell line.
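
    A selection rule of the kind described, keeping proteins whose spectral counts stay stable across conditions, can be sketched as follows. The file name, column layout, and thresholds are assumptions for illustration, not the authors' actual criteria.

    ```python
    # Candidate "barcode" proteins: low coefficient of variation across samples
    # and a reasonable average spectral count, so they are reliably detected.
    import pandas as pd

    counts = pd.read_csv("spectral_counts.csv", index_col="protein")  # rows = proteins, cols = samples
    cv = counts.std(axis=1) / counts.mean(axis=1)
    barcode = counts[(cv < 0.2) & (counts.mean(axis=1) > 5)]
    print(f"{len(barcode)} candidate endogenous standards")
    ```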

  17. Validation of a Novel Virtual Reality Simulator for Robotic Surgery

    PubMed Central

    Schreuder, Henk W. R.; Persson, Jan E. U.; Wolswijk, Richard G. H.; Ihse, Ingmar; Schijven, Marlies P.; Verheijen, René H. M.

    2014-01-01

    Objective. With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for use in training of robot-assisted surgery. Methods. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Results. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were "time to complete" and "economy of motion" (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The simulator in general, its visual graphics, the movements of the instruments, the interaction with objects, and the depth perception were all rated as realistic. The simulator is considered to be a very useful training tool for residents and medical specialists starting with robotic surgery. Conclusions. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery. PMID:24600328

  18. 40 CFR 1065.550 - Gas analyzer range validation and drift validation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... a dry sample measured with a CLD and the removed water is corrected based on measured CO2, CO, THC... may not validate the concentration subcomponents (e.g., THC and CH4 for NMHC) separately. For example, for NMHC measurements, perform drift validation on NMHC; do not validate THC and CH4 separately. (2...

  19. 40 CFR 1065.550 - Gas analyzer range validation and drift validation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... a dry sample measured with a CLD and the removed water is corrected based on measured CO2, CO, THC... may not validate the concentration subcomponents (e.g., THC and CH4 for NMHC) separately. For example, for NMHC measurements, perform drift validation on NMHC; do not validate THC and CH4 separately. (2...

  20. Perform light and optic experiments in Augmented Reality

    NASA Astrophysics Data System (ADS)

    Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan; Javahiraly, Nicolas; Israel, Kai

    2015-10-01

    In many scientific studies, lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware, such as an optical bench, light sources, apertures and different lens types. It is therefore not possible for the students to conduct any of the experiments outside the university's laboratory. Simple optical software simulators that enable students to perform lens experiments virtually already exist, but they are mostly desktop or web-browser based. Augmented Reality (AR) is a special case of mediated and mixed reality concepts, in which computers are used to add, subtract or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices, such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can easily be used to visualize a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare for the actual laboratory sessions and/or to recap the teaching content. The authors will present their experience with handheld augmented reality applications and their possibilities for light and optic experiments without the need for specialized optical hardware.
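
    The relationships such a simulator visualizes reduce, in the paraxial approximation, to the lensmaker's and thin-lens equations; a minimal worked example is given below (values chosen only for illustration).

    ```python
    # Thin-lens optics: focal length from surface radii and refractive index,
    # then the image distance for a given object distance (paraxial approximation).
    def lensmaker_focal_length(n, r1_m, r2_m):
        """1/f = (n - 1) * (1/R1 - 1/R2) for a thin lens in air."""
        return 1.0 / ((n - 1.0) * (1.0 / r1_m - 1.0 / r2_m))

    def image_distance(f_m, object_distance_m):
        """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
        return 1.0 / (1.0 / f_m - 1.0 / object_distance_m)

    f = lensmaker_focal_length(n=1.5, r1_m=0.10, r2_m=-0.10)  # symmetric biconvex lens
    print(f"f = {f:.3f} m; image at {image_distance(f, 0.30):.3f} m for a 0.30 m object")
    ```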

  1. Cygnus Performance in Subcritical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Corrow, M. Hansen, D. Henderson, S. Lutz, C. Mitton, et al.

    2008-02-01

    The Cygnus Dual Beam Radiographic Facility consists of two identical radiographic sources with the following specifications: 4-rad dose at 1 m, 1-mm spot size, 50-ns pulse length, 2.25-MeV endpoint energy. The facility is located in an underground tunnel complex at the Nevada Test Site. Here, SubCritical Experiments (SCEs) are performed to study the dynamic properties of plutonium. The Cygnus sources were developed as a primary diagnostic for these tests. Since SCEs are single-shot, high-value events, reliability and reproducibility are key issues. Enhanced reliability involves minimization of failure modes through design, inspection, and testing. Many unique hardware and operational features were incorporated into Cygnus to ensure reliability. Enhanced reproducibility involves normalization of shot-to-shot output, also through design, inspection, and testing. The first SCE to utilize Cygnus, Armando, was executed on May 25, 2004. A year later, April - May 2005, calibrations using a plutonium step wedge were performed. The results from this series were used for more precise interpretation of the Armando data. In the period February - May 2007, Cygnus was fielded on Thermos, which is a series of small-sample plutonium shots using a one-dimensional geometry. Pulsed power research generally dictates frequent change in hardware configuration. Conversely, SCE applications have typically required constant machine settings. Therefore, while operating during the past four years, we have accumulated a large database for evaluation of machine performance under highly consistent operating conditions. Through analysis of this database, Cygnus reliability and reproducibility on Armando, Step Wedge, and Thermos are presented.

  2. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
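
    The "probability of good performance in a new population" has a simple computational reading: draw from the joint predictive distribution of the C statistic and calibration slope implied by the multivariate meta-analysis and count the draws that meet both criteria. The sketch below uses invented means and covariance purely to illustrate the calculation.

    ```python
    # Monte Carlo estimate of P(C >= 0.7 and 0.9 <= calibration slope <= 1.1)
    # from an assumed bivariate-normal predictive distribution.
    import numpy as np
    from scipy.stats import multivariate_normal

    mean = [0.72, 1.00]                      # illustrative average C statistic and slope
    cov = [[0.0009, 0.0004],                 # illustrative predictive covariance
           [0.0004, 0.0100]]

    draws = multivariate_normal(mean, cov).rvs(size=100_000, random_state=0)
    good = (draws[:, 0] >= 0.7) & (np.abs(draws[:, 1] - 1.0) <= 0.1)
    print(f"P(good performance in a new population) ~ {good.mean():.2f}")
    ```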

  3. Development and co-validation of porcine insulin certified reference material by high-performance liquid chromatography-isotope dilution mass spectrometry.

    PubMed

    Wu, Liqing; Takatsu, Akiko; Park, Sang-Ryoul; Yang, Bin; Yang, Huaxin; Kinumi, Tomoya; Wang, Jing; Bi, Jiaming; Wang, Yang

    2015-04-01

    This article concerns the development and co-validation of a porcine insulin (pINS) certified reference material (CRM) produced by the National Institute of Metrology, People's Republic of China. Each CRM unit contained about 15 mg of purified solid pINS. The moisture content, amount of ignition residue, molecular mass, and purity of the pINS were measured. Both high-performance liquid chromatography-isotope dilution mass spectrometry and a purity deduction method were used to determine the mass fraction of the pINS. Fifteen units were selected to study the between-bottle homogeneity, and no inhomogeneity was observed. A stability study concluded that the CRM was stable for at least 12 months at -20 °C. The certified value of the CRM was (0.892 ± 0.036) g/g. A co-validation of the CRM was performed among Chinese, Japanese, and Korean laboratories under the framework of the Asian Collaboration on Reference Materials. The co-validation results agreed well with the certified value of the CRM. Consequently, the pINS CRM may be used as a calibration material or as a validation standard for pharmaceutical purposes to improve the quality of pharmaceutical products.

  4. Patient experiences questionnaire for interdisciplinary treatment for substance dependence (PEQ-ITSD): reliability and validity following a national survey in Norway.

    PubMed

    Haugum, Mona; Iversen, Hilde Hestad; Bjertnaes, Oyvind; Lindahl, Anne Karin

    2017-02-20

    Patient experiences are an important aspect of health care quality, but there is a lack of validated instruments for their measurement in the substance dependence literature. A new questionnaire to measure inpatients' experiences of interdisciplinary treatment for substance dependence has been developed in Norway. The aim of this study was to psychometrically test the new questionnaire, using data from a national survey in 2013. The questionnaire was developed based on a literature review, qualitative interviews with patients, expert group discussions and pretesting. Data were collected in a national survey covering all residential facilities with inpatients in treatment for substance dependence in 2013. Data quality and psychometric properties were assessed, including ceiling effects, item missing, exploratory factor analysis, and tests of internal consistency reliability, test-retest reliability and construct validity. The sample included 978 inpatients present at 98 residential institutions. After correcting for excluded patients (n = 175), the response rate was 91.4%. 28 out of 33 items had less than 20.5% of missing data or replies in the "not applicable" category. All but one item met the ceiling effect criterion of less than 50.0% of the responses in the most favorable category. Exploratory factor analysis resulted in three scales: "treatment and personnel", "milieu" and "outcome". All scales showed satisfactory internal consistency reliability (Cronbach's alpha ranged from 0.75-0.91) and test-retest reliability (ICC ranged from 0.82-0.85). 17 of 18 significant associations between single variables and the scales supported construct validity of the PEQ-ITSD. The content validity of the PEQ-ITSD was secured by a literature review, consultations with an expert group and qualitative interviews with patients. The PEQ-ITSD was used in a national survey in Norway in 2013 and psychometric testing showed that the instrument had satisfactory internal consistency
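
    One of the reliability statistics reported above, Cronbach's alpha, is easy to compute directly from an item-by-respondent matrix; the sketch below uses synthetic Likert responses rather than the survey's data.

    ```python
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: rows = respondents, columns = scale items."""
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(100, 8))   # 100 respondents, 8 Likert items (synthetic)
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```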

  5. NASA IN-STEP Cryo System Experiment flight test

    NASA Astrophysics Data System (ADS)

    Russo, S. C.; Sugimura, R. S.

    The Cryo System Experiment (CSE), a NASA In-Space Technology Experiments Program (IN-STEP) flight experiment, was flown on Space Shuttle Discovery (STS 63) in February 1995. The experiment was developed by Hughes Aircraft Company to validate in zero-g space a 65 K cryogenic system for focal planes, optics, instruments or other equipment (gamma-ray spectrometers and infrared and submillimetre imaging instruments) that requires continuous cryogenic cooling. The CSE is funded by the NASA Office of Advanced Concepts and Technology's IN-STEP and managed by the Jet Propulsion Laboratory (JPL). The overall goal of the CSE was to validate and characterize the on-orbit performance of the two thermal management technologies that comprise a hybrid cryogenic system. These thermal management technologies consist of (1) a second-generation long-life, low-vibration, Stirling-cycle 65 K cryocooler that was used to cool a simulated thermal energy storage device (TRP) and (2) a diode oxygen heat pipe thermal switch that enables physical separation between a cryogenic refrigerator and a TRP. All CSE experiment objectives and 100% of the experiment success criteria were achieved. The level of confidence provided by this flight experiment is an important NASA and Department of Defense (DoD) milestone prior to multi-year mission commitment. Presented are generic lessons learned from the system integration of cryocoolers for a flight experiment and the recorded zero-g performance of the Stirling cryocooler and the diode oxygen heat pipe.

  6. Modal identification experiment

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    1992-01-01

    The Modal Identification Experiment (MIE) is a proposed on-orbit experiment being developed by NASA's Office of Aeronautics and Space Technology wherein a series of vibration measurements would be made on various configurations of Space Station Freedom (SSF) during its on-orbit assembly phase. The experiment is to be conducted in conjunction with station reboost operations and consists of measuring the dynamic responses of the spacecraft produced by the station-based attitude control system and reboost thrusters, recording and transmitting the data, and processing the data on the ground to identify the natural frequencies, damping factors, and shapes of significant vibratory modes. The experiment would likely be a part of the Space Station on-orbit verification. Basic research objectives of MIE are to evaluate and improve methods for analytically modeling large space structures, to develop techniques for performing in-space modal testing, and to validate candidate techniques for in-space modal identification. From an engineering point of view, MIE will provide the first opportunity to obtain vibration data for the fully-assembled structure because SSF is too large and too flexible to be tested as a single unit on the ground. Such full-system data is essential for validating the analytical model of SSF which would be used in any engineering efforts associated with structural or control system changes that might be made to the station as missions evolve over time. Extensive analytical simulations of on-orbit tests, as well as exploratory laboratory simulations using small-scale models, have been conducted in-house and under contract to develop a measurement plan and evaluate its potential performance. In particular, performance trade and parametric studies conducted as part of these simulations were used to resolve issues related to the number and location of the measurements, the type of excitation, data acquisition and data processing, effects of noise and nonlinearities
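
    The core identification step MIE describes, extracting natural frequencies and damping from measured responses, can be illustrated on a synthetic free-decay record. The estimators below (FFT peak for frequency, logarithmic decrement for damping) are a simple stand-in, not the experiment's actual identification algorithm.

    ```python
    # Estimate modal frequency and damping ratio from a synthetic free-decay signal.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 200.0                                   # sample rate (Hz)
    t = np.arange(0, 20, 1 / fs)
    fn, zeta = 1.2, 0.02                         # "true" modal frequency and damping
    x = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(2 * np.pi * fn * np.sqrt(1 - zeta**2) * t)

    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    f_est = freqs[np.argmax(np.abs(np.fft.rfft(x)))]          # frequency from FFT peak

    peaks, _ = find_peaks(x)                                   # successive response peaks
    delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))      # logarithmic decrement
    zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
    print(f"f ~ {f_est:.2f} Hz, zeta ~ {zeta_est:.3f}")
    ```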

  7. Pilot In-Trail Procedure Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Bussink, Frank J. L.; Murdoch, Jennifer L.; Chamberlain, James P.; Chartrand, Ryan; Jones, Kenneth M.

    2008-01-01

    A Human-In-The-Loop experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) to investigate the viability of the In-Trail Procedure (ITP) concept from a flight crew perspective, by placing participating airline pilots in a simulated oceanic flight environment. The test subject pilots used new onboard avionics equipment that provided improved information about nearby traffic and enabled them, when specific criteria were met, to request an ITP flight level change referencing one or two nearby aircraft that might otherwise block the flight level change. The subject pilots' subjective assessments of ITP validity and acceptability were measured via questionnaires and discussions, and their objective performance in appropriately selecting, requesting, and performing ITP flight level changes was evaluated for each simulated flight scenario. Objective performance and subjective workload assessment data from the experiment's test conditions were analyzed for statistical and operational significance and are reported in the paper. Based on these results, suggestions are made to further improve the ITP.

  8. Constructing and Validating High-Performance MIEC-SVM Models in Virtual Screening for Kinases: A Better Way for Actives Discovery

    PubMed Central

    Sun, Huiyong; Pan, Peichen; Tian, Sheng; Xu, Lei; Kong, Xiaotian; Li, Youyong; Dan Li; Hou, Tingjun

    2016-01-01

    The MIEC-SVM approach, which combines molecular interaction energy components (MIEC) derived from free energy decomposition with a support vector machine (SVM), has been found effective in capturing the energetic patterns of protein-peptide recognition. However, the performance of this approach in identifying small-molecule inhibitors of drug targets has not been well assessed and validated by experiments. Here, by combining different model construction protocols, the issues related to developing the best MIEC-SVM models were first examined for three kinase targets (ABL, ALK, and BRAF). For the investigated targets, the optimized MIEC-SVM models performed much better than the models based on the default SVM parameters and Autodock for the tested datasets. The proposed strategy was then used to screen the Specs database for potential inhibitors of the ALK kinase. The optimized MIEC-SVM model identified 7 actives with IC50 < 10 μM among 50 purchased compounds (a hit rate of 14%, with 4 at nM potency) and performed much better than Autodock (3 actives with IC50 < 10 μM among 50 purchased compounds, a hit rate of 6%, with 2 at nM potency), suggesting that the proposed strategy is a powerful tool for structure-based virtual screening. PMID:27102549
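
    The workflow named in the title, training a support vector machine on per-residue interaction-energy components, has the general shape sketched below. The synthetic features, labels, and hyperparameter grid are placeholders and do not reproduce the authors' protocol.

    ```python
    # MIEC-SVM-style sketch: SVM classifier on interaction-energy features,
    # with hyperparameters tuned by cross-validated grid search.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 40))     # 200 compounds x 40 residue energy terms (synthetic)
    y = rng.integers(0, 2, size=200)   # 1 = active, 0 = inactive (synthetic labels)

    grid = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        param_grid={"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
        scoring="roc_auc", cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_, f"cross-validated AUC = {grid.best_score_:.2f}")
    ```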

  9. Upwelling Measurement Issues at the CERES Ocean Validation Experiment (COVE)

    NASA Astrophysics Data System (ADS)

    Fabbri, B. E.; Schuster, G. L.; Denn, F. M.; Arduini, R. F.; Madigan, J. J.; Rutan, D. A.

    2016-12-01

    The Clouds and the Earth's Radiant Energy System (CERES) satellite measures both solar-reflected and Earth-emitted radiation from the Earth's surface to the top of the atmosphere. One surface validation site is located at Chesapeake Light Station, approximately 25 kilometers east of Virginia Beach, Virginia (coordinates: 36.90N, 75.71W). In 1999, the CERES Ocean Validation Experiment (COVE) was established at Chesapeake Light Station. COVE is in its 17th year collecting radiometric and meteorological data. Other measurements over this time period include aerosol optical depth, water leaving radiance, precipitable water vapor and more. The issues we are trying to resolve for the upwelling flux are twofold. First, there is the "shadow effect". In the morning, the shadow of the tower appears on the water in the field of view underneath the shortwave (SW) and longwave (LW) upwelling instruments. An attempt to understand the shading effect is made by separating the data into "shaded" and "unshaded" time periods using the solar azimuth (SA) angle: periods with SA < 180 degrees are considered shaded, and periods with SA > 180 degrees are considered unshaded. The upwelling SW shaded and unshaded datasets differ by a maximum of 9.5 W/m2 and a minimum of -0.7 W/m2, with a mean difference of 3.6 W/m2. The upwelling LW shaded and unshaded datasets differ by a maximum of 8.0 W/m2 and a minimum of 1.0 W/m2, with a mean difference of 3.7 W/m2. The second issue is the "tower radiating effect", which is especially noticeable on clear, sunny days. During these days, the tower tends to heat up and radiate extra heat energy that is measured by the LW instrument. We compare Infrared Radiation Thermometer (IRT) measurements to Precision Infrared Radiometer (PIR) measurements and make a case for using IRT measurements as upwelling LW.
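
    The shaded/unshaded split described above amounts to labelling each record by solar azimuth and differencing the group means; a small sketch is given below with hypothetical file and column names.

    ```python
    # Compare mean upwelling fluxes between tower-shaded and unshaded periods.
    import pandas as pd

    df = pd.read_csv("cove_upwelling.csv")             # hypothetical 1-min flux records
    df["shaded"] = df["solar_azimuth_deg"] < 180       # morning shadow in the field of view

    means = df.groupby("shaded")[["sw_up_wm2", "lw_up_wm2"]].mean()
    print(means.loc[True] - means.loc[False])          # shaded minus unshaded means (W/m2)
    ```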

  10. The predictive validity of the MCAT for medical school performance and medical board licensing examinations: a meta-analysis of the published research.

    PubMed

    Donnon, Tyrone; Paolucci, Elizabeth Oddone; Violato, Claudio

    2007-01-01

    To conduct a meta-analysis of published studies to determine the predictive validity of the MCAT on medical school performance and medical board licensing examinations. The authors included all peer-reviewed published studies reporting empirical data on the relationship between MCAT scores and medical school performance or medical board licensing exam measures. Moderator variables, participant characteristics, and medical school performance/medical board licensing exam measures were extracted and reviewed separately by three reviewers using a standardized protocol. Medical school performance measures from 11 studies and medical board licensing examinations from 18 studies, for a total of 23 studies, were selected. A random-effects model meta-analysis of weighted effect sizes (r) resulted in (1) a predictive validity coefficient for the MCAT in the preclinical years of r = 0.39 (95% confidence interval [CI], 0.21-0.54) and on the USMLE Step 1 of r = 0.60 (95% CI, 0.50-0.67); and (2) the biological sciences subtest as the best predictor of medical school performance in the preclinical years (r = 0.32; 95% CI, 0.21-0.42) and on the USMLE Step 1 (r = 0.48; 95% CI, 0.41-0.54). The predictive validity of the MCAT ranges from small to medium for both medical school performance and medical board licensing exam measures. The medical profession is challenged to develop screening and selection criteria with improved validity that can supplement the MCAT as an important criterion for admission to medical schools.
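
    The random-effects synthesis of correlation coefficients used here follows a standard recipe (Fisher z-transform, DerSimonian-Laird between-study variance, inverse-variance weights); a compact sketch with invented study values is shown below.

    ```python
    # Random-effects pooling of correlations via Fisher z and DerSimonian-Laird tau^2.
    import numpy as np

    r = np.array([0.35, 0.42, 0.58, 0.61, 0.48])   # hypothetical study correlations
    n = np.array([220, 310, 150, 480, 260])        # hypothetical sample sizes

    z = np.arctanh(r)                              # Fisher z-transform
    v = 1.0 / (n - 3)                              # within-study variances
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(r) - 1)) / C)        # between-study variance
    w_star = 1.0 / (v + tau2)
    z_pooled = np.sum(w_star * z) / np.sum(w_star)
    print(f"pooled r = {np.tanh(z_pooled):.2f}, tau^2 = {tau2:.4f}")
    ```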

  11. Visuospatial skills and computer game experience influence the performance of virtual endoscopy.

    PubMed

    Enochsson, Lars; Isaksson, Bengt; Tour, René; Kjellin, Ann; Hedman, Leif; Wredmark, Torsten; Tsai-Felländer, Li

    2004-11-01

    Advanced medical simulators have been introduced to facilitate surgical and endoscopic training and thereby improve patient safety. Residents trained in the Procedicus Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) laparoscopic simulator perform laparoscopic cholecystectomy more safely and faster than a control group. Little has been reported regarding whether factors like gender, computer experience, and visuospatial tests can predict performance with a medical simulator. Our aim was to investigate whether such factors influence the performance of simulated gastroscopy. Seventeen medical students were asked about computer gaming experiences. Before virtual endoscopy, they performed the visuospatial test PicSOr, which assesses the tested person's ability to create a three-dimensional image from a two-dimensional presentation. Each student performed one gastroscopy (level 1, case 1) in the GI Mentor II (Simbionix), and several variables related to performance were registered. Percentage of time spent with a clear view in the endoscope correlated well with the performance on the PicSOr test (r = 0.56, P < 0.001). Efficiency of screening also correlated with PicSOr (r = 0.23, P < 0.05). In students with computer gaming experience, the efficiency of screening was higher (33.6% +/- 3.1% versus 22.6% +/- 2.8%, P < 0.05) and the duration of the examination was 1.5 minutes shorter (P < 0.05). A similar trend was seen in men compared with women. The visuospatial test PicSOr predicts the results with the endoscopic simulator GI Mentor II. Two-dimensional image experience, as in computer games, also seems to affect the outcome.

  12. Towards natural language question generation for the validation of ontologies and mappings.

    PubMed

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction are performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.
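
    The question-generation idea can be illustrated on a single axiom type: each subclass axiom in the ontology yields one yes/no question for the domain expert. The sketch below uses rdflib with a hypothetical ontology file; real verbalization, factorization, and ordering are far richer than this.

    ```python
    # Generate simple validation questions from rdfs:subClassOf axioms.
    from rdflib import Graph, RDFS

    g = Graph()
    g.parse("ontology_fragment.ttl", format="turtle")   # hypothetical ontology fragment

    def label(node):
        """Prefer an rdfs:label when present, otherwise fall back to the identifier."""
        return str(next(g.objects(node, RDFS.label), node))

    questions = [
        f"Is every '{label(sub)}' also a kind of '{label(sup)}'?"
        for sub, _, sup in g.triples((None, RDFS.subClassOf, None))
    ]
    for q in questions[:10]:                             # present a bounded batch to the expert
        print(q)
    ```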

  13. Validity of the Symbol Digit Modalities Test as a cognition performance outcome measure for multiple sclerosis

    PubMed Central

    Benedict, Ralph HB; DeLuca, John; Phillips, Glenn; LaRocca, Nicholas; Hudson, Lynn D; Rudick, Richard

    2017-01-01

    Cognitive and motor performance measures are commonly employed in multiple sclerosis (MS) research, particularly when the purpose is to determine the efficacy of treatment. The increasing focus of new therapies on slowing progression or reversing neurological disability makes the utilization of sensitive, reproducible, and valid measures essential. Processing speed is a basic elemental cognitive function that likely influences downstream processes such as memory. The Multiple Sclerosis Outcome Assessments Consortium (MSOAC) includes representatives from advocacy organizations, Food and Drug Administration (FDA), European Medicines Agency (EMA), National Institute of Neurological Disorders and Stroke (NINDS), academic institutions, and industry partners along with persons living with MS. Among the MSOAC goals is acceptance and qualification by regulators of performance outcomes that are highly reliable and valid, practical, cost-effective, and meaningful to persons with MS. A critical step for these neuroperformance metrics is elucidation of clinically relevant benchmarks, well-defined degrees of disability, and gradients of change that are deemed clinically meaningful. This topical review provides an overview of research on one particular cognitive measure, the Symbol Digit Modalities Test (SDMT), recognized as being particularly sensitive to slowed processing of information that is commonly seen in MS. The research in MS clearly supports the reliability and validity of this test and recently has supported a responder definition of SDMT change approximating 4 points or 10% in magnitude. PMID:28206827

  14. Validity of the Symbol Digit Modalities Test as a cognition performance outcome measure for multiple sclerosis.

    PubMed

    Benedict, Ralph Hb; DeLuca, John; Phillips, Glenn; LaRocca, Nicholas; Hudson, Lynn D; Rudick, Richard

    2017-04-01

    Cognitive and motor performance measures are commonly employed in multiple sclerosis (MS) research, particularly when the purpose is to determine the efficacy of treatment. The increasing focus of new therapies on slowing progression or reversing neurological disability makes the utilization of sensitive, reproducible, and valid measures essential. Processing speed is a basic elemental cognitive function that likely influences downstream processes such as memory. The Multiple Sclerosis Outcome Assessments Consortium (MSOAC) includes representatives from advocacy organizations, Food and Drug Administration (FDA), European Medicines Agency (EMA), National Institute of Neurological Disorders and Stroke (NINDS), academic institutions, and industry partners along with persons living with MS. Among the MSOAC goals is acceptance and qualification by regulators of performance outcomes that are highly reliable and valid, practical, cost-effective, and meaningful to persons with MS. A critical step for these neuroperformance metrics is elucidation of clinically relevant benchmarks, well-defined degrees of disability, and gradients of change that are deemed clinically meaningful. This topical review provides an overview of research on one particular cognitive measure, the Symbol Digit Modalities Test (SDMT), recognized as being particularly sensitive to slowed processing of information that is commonly seen in MS. The research in MS clearly supports the reliability and validity of this test and recently has supported a responder definition of SDMT change approximating 4 points or 10% in magnitude.
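
    The responder threshold cited in both records above can be written down directly; the helper below treats a change of at least 4 points or 10% of the baseline score as meaningful, which is one plausible reading of the definition rather than an official rule.

    ```python
    # Illustrative SDMT meaningful-change check (4 points or 10% of baseline).
    def sdmt_meaningful_change(baseline: float, follow_up: float) -> bool:
        change = abs(follow_up - baseline)
        return change >= 4 or (baseline > 0 and change / baseline >= 0.10)

    print(sdmt_meaningful_change(45, 50))  # True: a 5-point change exceeds both thresholds
    ```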

  15. Validating a work group climate assessment tool for improving the performance of public health organizations

    PubMed Central

    Perry, Cary; LeMay, Nancy; Rodway, Greg; Tracy, Allison; Galer, Joan

    2005-01-01

    Background This article describes the validation of an instrument to measure work group climate in public health organizations in developing countries. The instrument, the Work Group Climate Assessment Tool (WCA), was applied in Brazil, Mozambique, and Guinea to assess the intermediate outcomes of a program to develop leadership for performance improvement. Data were collected from 305 individuals in 42 work groups, who completed a self-administered questionnaire. Methods The WCA was initially validated using Cronbach's alpha reliability coefficient and exploratory factor analysis. This article presents the results of a second validation study to refine the initial analyses to account for nested data, to provide item-level psychometrics, and to establish construct validity. Analyses included eigenvalue decomposition analysis, confirmatory factor analysis, and validity and reliability analyses. Results This study confirmed the validity and reliability of the WCA across work groups with different demographic characteristics (gender, education, management level, and geographical location). The study showed that there is agreement between the theoretical construct of work climate and the items in the WCA tool across different populations. The WCA captures a single perception of climate rather than individual sub-scales of clarity, support, and challenge. Conclusion The WCA is useful for comparing the climates of different work groups, tracking the changes in climate in a single work group over time, or examining differences among individuals' perceptions of their work group climate. Application of the WCA before and after a leadership development process can help work groups hold a discussion about current climate and select a target for improvement. The WCA provides work groups with a tool to take ownership of their own group climate through a process that is simple and objective and that protects individual confidentiality. PMID:16223447

  16. Patient Experience: A Critical Indicator of Healthcare Performance.

    PubMed

    Guler, Pamela H

    2017-01-01

    Patient experience has become a critical differentiator for healthcare organizations, and it will only grow in importance as transparency and consumerism dominate the healthcare landscape. Creating and sustaining a consistently exceptional experience that promotes patient engagement and the best outcomes is far more than just "satisfying" patients, going well beyond amenities that may be provided. Perception of the care experience is often shaped by the methods we use to address the biopsychosocial needs of patients. Building relationships and communicating well with our patients and families are primary approaches. In a complex healthcare situation, patients may not fully understand or remember the highly clinical nature of treatment. However, they always remember how we made them feel, how we communicated with them as a team, and what interactions they experienced while in our care. Patients who are fully informed and feel connected to their caregivers are often less anxious than those who are disengaged. Informed and engaged patients are enabled to participate in their healthcare. Organizations that focus on developing an accountable culture, one that inspires caregivers to communicate in a way that connects to patients' mind, body, and spirit while leveraging standard, evidence-based patient experience practices, find that patients' perception of care, or "the patient experience," is vastly improved. Adventist Health System has embarked on a journey to patient experience excellence with a commitment to whole-person care and standard patient experience practice across the system. Recognized with several national awards, we continue to strengthen our approach toward bringing all of our campuses and patient settings to sustained high-level performance. We have found that a combination of strong, accountable leadership; a focus on employee culture; engagement of physicians; standardized patient experience practices and education; and meaningful use of patient feedback are top

  17. Control Performance, Aerodynamic Modeling, and Validation of Coupled Simulation Techniques for Guided Projectile Roll Dynamics

    DTIC Science & Technology

    2014-11-01

    …has been explored in depth in the literature. Of particular interest for this study are investigations into roll control. Isolating the… (Report excerpt; authors: Jubaraj Sahu, Frank Fresconi, and Karen R. Heavey, Weapons and Materials Research.)

  18. Tympanic thermometer performance validation by use of a body-temperature fixed point blackbody

    NASA Astrophysics Data System (ADS)

    Machin, Graham; Simpson, Robert

    2003-04-01

    The use of infrared tympanic thermometers within the medical community (and more generically in the public domain) has recently grown rapidly, displacing more traditional forms of thermometry such as mercury-in-glass. Besides the obvious health concerns over mercury, the increase in the use of tympanic thermometers is related to a number of factors such as their speed and relatively non-invasive method of operation. The calibration and testing of such devices is covered by a number of international standards (ASTM, prEN, JIS) which specify the design of calibration blackbodies. However, these calibration sources are impractical for day-to-day in-situ validation purposes. In addition, several studies (e.g. Modell et al., Craig et al.) have thrown doubt on the accuracy of tympanic thermometers in clinical use. With this in mind, the NPL is developing a practical, portable and robust primary reference fixed-point source for tympanic thermometer validation. The aim of this simple device is to give the clinician a rapid way of validating the performance of their tympanic thermometer, enabling the detection of malfunctioning thermometers and giving confidence in the measurement to the clinician (and patient!) at point of use. The reference fixed point operates at a temperature of 36.3 °C (97.3 °F) with a repeatability of approximately +/- 20 mK. The fixed-point design has taken into consideration the optical characteristics of tympanic thermometers, enabling devices with wide-angle fields of view to be successfully tested. The overall uncertainty of the device is estimated to be less than 0.1 °C. The paper gives a description of the fixed point, its design and construction, as well as the results to date of validation tests.
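
    What such a fixed point stabilizes is, ultimately, the in-band radiance the thermometer's detector sees. The sketch below evaluates Planck's law over a typical thermal-infrared band (8-14 um, an assumption, not a figure from the paper) at 36.3 °C and shows the radiance change corresponding to a 0.1 K error.

    ```python
    # In-band blackbody radiance at the fixed-point temperature via Planck's law.
    import numpy as np
    from scipy.integrate import quad

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23    # Planck, speed of light, Boltzmann (SI)

    def spectral_radiance(wavelength_m, temp_k):
        """Planck spectral radiance B(lambda, T) in W sr^-1 m^-3."""
        return (2 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * temp_k))

    def band_radiance(temp_k, lo_m=8e-6, hi_m=14e-6):
        value, _ = quad(spectral_radiance, lo_m, hi_m, args=(temp_k,))
        return value                             # W sr^-1 m^-2 over the band

    t0 = 273.15 + 36.3
    print(f"L(36.3 C) = {band_radiance(t0):.1f} W/(sr m2); "
          f"+0.1 K changes L by {band_radiance(t0 + 0.1) - band_radiance(t0):.2f} W/(sr m2)")
    ```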

  19. The New Millennium Program: Validating Advanced Technologies for Future Space Missions

    NASA Technical Reports Server (NTRS)

    Minning, Charles P.; Luers, Philip

    1999-01-01

    This presentation reviews the activities of the New Millennium Program (NMP) in validating advanced technologies for space missions. The focus of these breakthrough technologies is to enable new capabilities to fulfill the science needs, while reducing the costs of future missions. There is a broad spectrum of NMP partners, including government agencies, universities and private industry. DS-1 was launched on October 24, 1998. Amongst the technologies validated by the NMP on DS-1 are a Low Power Electronics Experiment, the Power Activation and Switching Module, and Multi-Functional Structures. The first two of these technologies are operational and the data analysis is still ongoing. The third is also operational, and its performance parameters have been verified. The second program, DS-2, was launched on January 3, 1999. It is expected to impact near Mars' southern polar region on December 3, 1999. The technologies used on this mission that are awaiting validation are an advanced microcontroller, a power microelectronics unit, an evolved water experiment and soil thermal conductivity experiment, Lithium-Thionyl Chloride batteries, the flexible cable interconnect, the aeroshell/entry system, and a compact telecom system. EO-1, on schedule for launch in December 1999, carries several technologies to be validated. Amongst these are a Carbon-Carbon Radiator, an X-band Phased Array Antenna, a pulsed plasma thruster, a wideband advanced recorder processor, an atmospheric corrector, lightweight flexible solar arrays, the Advanced Land Imager and the Hyperion instrument.

  20. Demonstration and Validation of Two Coat High Performance Coating System for Steel Structures in Corrosive Environments

    DTIC Science & Technology

    2016-12-01

    Final Report on Project F12-AR06, ERDC/CERL TR-16-27, Construction Engineering Research Laboratory, Corrosion Prevention and Control Program, December 2016. Abstract: Department of Defense (DoD) installations