Electrolysis Performance Improvement and Validation Experiment
NASA Technical Reports Server (NTRS)
Schubert, Franz H.
1992-01-01
Viewgraphs on electrolysis performance improvement and validation experiment are presented. Topics covered include: water electrolysis: an ever increasing need/role for space missions; static feed electrolysis (SFE) technology: a concept developed for space applications; experiment objectives: why test in microgravity environment; and experiment description: approach, hardware description, test sequence and schedule.
NASA Astrophysics Data System (ADS)
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment) and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it does not fully validate PP, since we do not learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation during the present year.
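To make the experimental design concrete, the following minimal sketch (Python, entirely synthetic data, with a hypothetical linear regression standing in for a submitted downscaling method) reproduces the fold structure of Experiment 1: five consecutive 6-year blocks over 1979-2008.

```python
import numpy as np

def consecutive_fold_indices(years, n_folds=5):
    """Split the years 1979-2008 into consecutive blocks (five 6-year
    folds, as in VALUE Experiment 1) and yield train/test masks."""
    blocks = np.array_split(np.unique(years), n_folds)
    for block in blocks:
        test = np.isin(years, block)
        yield ~test, test

# Hypothetical daily predictor/predictand pairs; real runs would use
# ERA-Interim predictors and ECA&D station series instead.
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1979, 2009), 365)
x = rng.normal(size=years.size)                # large-scale predictor
y = 2.0 * x + rng.normal(size=years.size)      # local predictand

for i, (train, test) in enumerate(consecutive_fold_indices(years)):
    a, b = np.polyfit(x[train], y[train], 1)   # toy PP-style regression
    rmse = np.sqrt(np.mean((a * x[test] + b - y[test]) ** 2))
    print(f"fold {i}: {years[test].min()}-{years[test].max()}, RMSE={rmse:.2f}")
```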
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
ERIC Educational Resources Information Center
Coelho, Francisco Antonio, Jr.; Ferreira, Rodrigo Rezende; Paschoal, Tatiane; Faiad, Cristiane; Meneses, Paulo Murce
2015-01-01
The purpose of this study was twofold: to assess evidences of construct validity of the Brazilian Scale of Tutors Competences in the field of Open and Distance Learning and to examine if variables such as professional experience, perception of the student´s learning performance and prior experience influence the development of technical and…
Performance-based comparison of neonatal intubation training outcomes: simulator and live animal.
Andreatta, Pamela B; Klotz, Jessica J; Dooley-Hash, Suzanne L; Hauptman, Joe G; Biddinger, Bea; House, Joseph B
2015-02-01
The purpose of this article was to establish psychometric validity evidence for competency assessment instruments and to evaluate the impact of 2 forms of training on the abilities of clinicians to perform neonatal intubation. To inform the development of assessment instruments, we conducted comprehensive task analyses including each performance domain associated with neonatal intubation. Expert review confirmed content validity. Construct validity was established using the instruments to differentiate between the intubation performance abilities of practitioners (N = 294) with variable experience (novice through expert). Training outcomes were evaluated using a quasi-experimental design to evaluate performance differences between 294 subjects randomly assigned to 1 of 2 training groups. The training intervention followed American Heart Association Pediatric Advanced Life Support and Neonatal Resuscitation Program protocols with hands-on practice using either (1) live feline or (2) simulated feline models. Performance assessment data were captured before and directly following the training. All data were analyzed using analysis of variance with repeated measures and statistical significance set at P < .05. Content validity, reliability, and consistency evidence were established for each assessment instrument. Construct validity for each assessment instrument was supported by significantly higher scores for subjects with greater levels of experience, as compared with those with less experience (P = .000). Overall, subjects performed significantly better in each assessment domain, following the training intervention (P = .000). After controlling for experience level, there were no significant differences among the cognitive, performance, and self-efficacy outcomes between clinicians trained with live animal model or simulator model. Analysis of retention scores showed that simulator trained subjects had significantly higher performance scores after 18 weeks (P = .01) and 52 weeks (P = .001) and cognitive scores after 52 weeks (P = .001). The results of this study demonstrate the feasibility of using valid, reliable assessment instruments to assess clinician competency and self-efficacy in the performance of neonatal intubation. We demonstrated the relative equivalency of live animal and simulation-based models as tools to support acquisition of neonatal intubation skills. Retention of performance abilities was greater for subjects trained using the simulator, likely because it afforded greater opportunity for repeated practice. Outcomes in each assessment area were influenced by the previous intubation experience of participants. This suggests that neonatal intubation training programs could be tailored to the level of provider experience to make efficient use of time and educational resources. Future research focusing on the uses of assessment in the applied clinical environment, as well as identification of optimal training cycles for performance retention, is merited.
Comparative assessment of three standardized robotic surgery training methods.
Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C
2013-10-01
To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.
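Both validity checks above reduce to standard statistics; a minimal sketch (Python/SciPy, with fabricated scores standing in for the real assessments) shows how construct validity (Kruskal-Wallis) and cross-method validity (Spearman's ρ) might be computed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fabricated scores: experts are assumed faster/better than novices.
novice_inanimate = rng.normal(60, 10, 38)
expert_inanimate = rng.normal(40, 10, 11)

# Construct validity: do experts and novices differ on this method?
h, p = stats.kruskal(novice_inanimate, expert_inanimate)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

# Cross-method validity: does performance on one method track another?
inanimate = np.concatenate([novice_inanimate, expert_inanimate])
vr = -0.8 * inanimate + rng.normal(0, 5, inanimate.size)  # fabricated link
rho, p_rho = stats.spearmanr(inanimate, vr)
print(f"Spearman: rho={rho:.2f}, p={p_rho:.4f}")
```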
SeaSat-A Satellite Scatterometer (SASS) Validation and Experiment Plan
NASA Technical Reports Server (NTRS)
Schroeder, L. C. (Editor)
1978-01-01
This plan was generated by the SeaSat-A satellite scatterometer experiment team to define the pre- and post-launch activities necessary to conduct sensor validation and geophysical evaluation. Details included are an instrument and experiment description, performance requirements, success criteria, constraints, mission requirements, data processing requirements and data analysis responsibilities.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Validity of clinical color vision tests for air traffic control specialists.
DOT National Transportation Integrated Search
1992-10-01
An experiment on the relationship between aeromedical color vision screening test performance and performance on color-dependent tasks of Air Traffic Control Specialists was replicated to expand the data base supporting the job-related validity of th...
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
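The paper's summary equations are not reproduced here, but the general recipe (screen each metric for construct validity, then combine only the survivors) can be sketched as below, with fabricated data, group sizes from the abstract, and a simple mean z-score as the hypothetical summary metric.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = {"novice": 12, "intermediate": 12, "expert": 11}
labels = np.repeat(list(groups), list(groups.values()))
skill = np.array([{"novice": 0, "intermediate": 1, "expert": 2}[g] for g in labels])

# Fabricated metric matrix: rows = subjects, columns = candidate metrics.
metrics = np.column_stack([
    -40 * skill + rng.normal(300, 40, 35),  # time: decreases with skill
    -4 * skill + rng.normal(20, 3, 35),     # path length: decreases with skill
    rng.normal(5, 2, 35),                   # a metric unrelated to experience
])

valid_cols = []
for j in range(metrics.shape[1]):
    samples = [metrics[labels == g, j] for g in groups]
    _, p = stats.kruskal(*samples)          # keep metrics that discriminate
    if p < 0.05:
        valid_cols.append(j)

# Hypothetical summary metric: mean z-score over validated metrics only.
summary = stats.zscore(metrics[:, valid_cols], axis=0).mean(axis=1)
print("validated metric columns:", valid_cols)
```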
Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment
NASA Technical Reports Server (NTRS)
Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)
2017-01-01
Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.
Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.
1993-01-01
An experiment conducted in the framework of a NASA In-Space Technology Experiments Program, based on a concept of inflatable deployable structures, is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit; gravity-induced deflection of the structure precludes any meaningful ground-based demonstration of functional performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, David; Leconte, Pierre; Destouches, Christophe
2015-07-01
Two recent papers justified a new experimental program to give a new basis for the validation of ²³⁸U nuclear data, namely neutron-induced inelastic scattering and transport codes at neutron fission energies. The general idea is to perform a neutron transmission experiment through natural uranium material. As shown by Hans Bethe, neutron transmissions measured by dosimetric responses are linked to inelastic cross sections. This paper describes the principle and the results of such an experiment, called EXCALIBUR, performed recently (January and October 2014) at the CALIBAN reactor facility. (authors)
The Role of Structural Models in the Solar Sail Flight Validation Process
NASA Technical Reports Server (NTRS)
Johnston, John D.
2004-01-01
NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in-space structural characteristics (the focus of this poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.
Observing System Simulation Experiments
NASA Technical Reports Server (NTRS)
Prive, Nikki
2015-01-01
This presentation gives an overview of Observing System Simulation Experiments (OSSEs). The components of an OSSE are described, along with a discussion of the process for validating, calibrating, and performing experiments.
Validation of multiprocessor systems
NASA Technical Reports Server (NTRS)
Siewiorek, D. P.; Segall, Z.; Kong, T.
1982-01-01
Experiments that can be used to validate fault-free performance of multiprocessor systems in aerospace systems integrating flight controls and avionics are discussed. Engineering prototypes for two fault-tolerant multiprocessors are tested.
The Effect of Aptitude and Experience on Mechanical Job Performance.
ERIC Educational Resources Information Center
Mayberry, Paul W.; Carey, Neil B.
1997-01-01
The validity of the Armed Services Vocational Aptitude Battery (ASVAB) in predicting mechanical job performance was studied with 891 automotive and 522 helicopter mechanics. The mechanical maintenance component of the ASVAB predicted hands-on performance, job knowledge, and training grades quite well, but experience was more predictive of…
NASA Astrophysics Data System (ADS)
Banica, M. C.; Chun, J.; Scheuermann, T.; Weigand, B.; Wolfersdorf, J. v.
2009-01-01
Scramjet-powered vehicles can decrease costs for access to space, but substantial obstacles still exist in their realization. For example, experiments in the relevant Mach number regime are difficult to perform and flight testing is expensive. Therefore, numerical methods are often employed for system layout, but they require validation against experimental data. Here, we validate the commercial code CFD++ against experimental results for hydrogen combustion in the supersonic combustion facility of the Institute of Aerospace Thermodynamics (ITLR) at the Universität Stuttgart. Fuel is injected through a lobed strut injector, which provides rapid mixing. Our numerical data show reasonable agreement with experiments. We further investigate the effects of varying equivalence ratios on several important performance parameters.
NASA Technical Reports Server (NTRS)
Cayeux, P.; Raballand, F.; Borde, J.; Berges, J.-C.; Meyssignac, B.
2007-01-01
Within the framework of a partnership agreement, EADS ASTRIUM has worked since June 2006 for the CNES formation flying experiment on the PRISMA mission. EADS ASTRIUM is responsible for the anti-collision function. This responsibility covers the design and the development of the function as a Matlab/Simulink library, as well as its functional validation and performance assessment. PRISMA is a technology in-orbit testbed mission from the Swedish National Space Board, mainly devoted to formation flying demonstration. PRISMA is made of two micro-satellites that will be launched in 2009 into a quasi-circular SSO at about 700 km of altitude. The CNES FFIORD experiment embedded on PRISMA aims at flight-validating an FFRF sensor designed for formation control and assessing its performance, in preparation for future formation flying missions such as Simbol X; FFIORD aims as well at validating various typical autonomous rendezvous and formation guidance and control algorithms. This paper presents the principles of the collision avoidance function developed by EADS ASTRIUM for FFIORD; three kinds of maneuvers were implemented and are presented in this paper together with their performance.
Validation Experiments for Spent-Fuel Dry-Cask In-Basket Convection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Barton L.
2016-08-16
This work consisted of the following major efforts: 1. Literature survey on validation of external natural convection; 2. Design the experiment; 3. Build the experiment; 4. Run the experiment; 5. Collect results; 6. Disseminate results; and 7. Perform a CFD validation study using the results. We note that while all tasks are complete, some deviations from the original plan were made. Specifically, geometrical changes in the parameter space were skipped in favor of flow condition changes, which were found to be much more practical to implement. Changing the geometry required new as-built measurements, which proved extremely costly and impractical given the time and funds available.
ERIC Educational Resources Information Center
Özenç, Emine Gül; Dogan, M. Cihangir
2014-01-01
This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…
Learning to recognize rat social behavior: Novel dataset and cross-dataset application.
Lorbach, Malte; Kyriakou, Elisavet I; Poppe, Ronald; van Dam, Elsbeth A; Noldus, Lucas P J J; Veltkamp, Remco C
2018-04-15
Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings. To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first, publicly available rat social interaction dataset, RatSI. We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight in the performance of classifiers. With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that were developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and that their acceptance criteria therefore differ as well.
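The abstract does not spell out the inverse method; purely as an illustration, one classical approach fits the damped periodic half-space solution for soil temperature to sensor data and recovers the thermal diffusivity from the damping depth.

```python
import numpy as np
from scipy.optimize import curve_fit

# Half-space solution for a periodic surface temperature:
#   T(z, t) = Tm + A * exp(-z/d) * sin(w*t - z/d),  d = sqrt(2*alpha/w),
# so fitting the damping depth d yields the diffusivity alpha.
w = 2 * np.pi / 365.0  # annual forcing, rad/day

def model(zt, Tm, A, d):
    z, t = zt
    return Tm + A * np.exp(-z / d) * np.sin(w * t - z / d)

# Hypothetical sensor data: three depths (m) sampled over one year.
z = np.repeat([0.5, 1.0, 2.0], 120)
t = np.tile(np.linspace(0.0, 365.0, 120), 3)
obs = model((z, t), 15.0, 10.0, 2.2)
obs += np.random.default_rng(3).normal(0, 0.2, obs.size)

(Tm, A, d), _ = curve_fit(model, (z, t), obs, p0=[10.0, 5.0, 1.0])
alpha = d ** 2 * w / 2.0
print(f"damping depth d = {d:.2f} m, diffusivity alpha = {alpha:.3f} m^2/day")
```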
ERIC Educational Resources Information Center
Ramirez, Pablo C.; Jimenez-Silva, Margarita
2015-01-01
In this article the authors draw from culturally responsive teaching and multicultural education to describe performance poetry as an effective strategy for validating secondary aged Latino youths' lived experiences. Supported by teacher modeling and the incorporation of community poets, students created and shared their own powerful poems that…
A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle
Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive. Inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series is also presented along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements, to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
Forensic Uncertainty Quantification of Explosive Dispersal of Particles
NASA Astrophysics Data System (ADS)
Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho
2017-06-01
In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued by poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments to discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience has been that valuable insights may be gained by making an analogy to crime scene investigation when examining validation experiments. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.
VDA, a Method of Choosing a Better Algorithm with Fewer Validations
Kluger, Yuval
2011-01-01
The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
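A hedged sketch of the core idea follows, assuming binary predictions and using a simple greedy heuristic; the released VDA software may implement the selection differently.

```python
import numpy as np

def select_validation_set(pred, k):
    """Greedy VDA-style selection. pred is an (n_items, n_algorithms)
    matrix of binary predictions; pick k items so that the minimum
    pairwise Hamming distance between the algorithms' prediction
    vectors, restricted to the chosen items, is as large as possible."""
    n_items, n_alg = pred.shape
    chosen = []
    for _ in range(k):
        best_item, best_score = None, -1
        for i in range(n_items):
            if i in chosen:
                continue
            sub = pred[chosen + [i], :]  # candidate validation set
            dists = [np.sum(sub[:, a] != sub[:, b])
                     for a in range(n_alg) for b in range(a + 1, n_alg)]
            if min(dists) > best_score:
                best_item, best_score = i, min(dists)
        chosen.append(best_item)
    return chosen

# Hypothetical: 100 candidate predictions from 4 algorithms.
pred = np.random.default_rng(4).integers(0, 2, size=(100, 4))
print(select_validation_set(pred, 10))
```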
Validation of a wireless modular monitoring system for structures
NASA Astrophysics Data System (ADS)
Lynch, Jerome P.; Law, Kincho H.; Kiremidjian, Anne S.; Carryer, John E.; Kenny, Thomas W.; Partridge, Aaron; Sundararajan, Arvind
2002-06-01
A wireless sensing unit for use in a Wireless Modular Monitoring System (WiMMS) has been designed and constructed. Drawing upon advanced technological developments in the areas of wireless communications, low-power microprocessors and micro-electro-mechanical system (MEMS) sensing transducers, the wireless sensing unit represents a high-performance yet low-cost solution for monitoring the short-term and long-term performance of structures. A sophisticated reduced instruction set computer (RISC) microcontroller is placed at the core of the unit to accommodate on-board computations, measurement filtering and data interrogation algorithms. The functionality of the wireless sensing unit is validated through various experiments involving multiple sensing transducers interfaced to the sensing unit. In particular, MEMS-based accelerometers are used as the primary sensing transducer in this study's validation experiments. A five-degree-of-freedom scaled test structure mounted upon a shaking table is employed for system validation.
Validation results of satellite mock-up capturing experiment using nets
NASA Astrophysics Data System (ADS)
Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil
2017-05-01
The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, was aiming to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras were recording the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow image post-processing using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine accurately the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly configured according to the parabolic flight scenario, and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
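As an aside, the ICP step mentioned for knot tracking is compact enough to sketch; the following is a generic point-to-point ICP iteration (nearest-neighbour matching plus Kabsch rigid alignment), not PATENDER's actual implementation.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest neighbour in dst, then solve the best rigid transform."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)          # Kabsch algorithm
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # reflection-safe rotation
    t = mu_m - R @ mu_s
    return src @ R.T + t, R, t

# Hypothetical knot positions in two consecutive frames (pure translation).
knots = np.random.default_rng(5).normal(size=(50, 3))
moved = knots + np.array([0.1, 0.0, -0.05])
aligned, R, t = icp_step(knots, moved)
print("recovered translation:", np.round(t, 3))
```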
Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Ottenstein, Laura; Douglas, Donya
2008-01-01
This paper presents the development of the Thermal Loop experiment under NASA's New Millennium Program Space Technology 8 (ST8) Project. The Thermal Loop experiment was originally planned for validating in space an advanced heat transport system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers. Details of the thermal loop concept, technical advances and benefits, Level 1 requirements and the technology validation approach are described. An MLHP breadboard has been built and tested in the laboratory and thermal vacuum environments, and has demonstrated excellent performance that met or exceeded the design requirements. The MLHP retains all features of state-of-the-art loop heat pipes and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. In addition, an analytical model has been developed to simulate the steady state and transient operation of the MLHP, and the model predictions agreed very well with experimental results. A protoflight MLHP has been built and is being tested in a thermal vacuum chamber to validate its performance and technical readiness for a flight experiment.
A Performance Management Framework for Civil Engineering
1990-09-01
cultural change. A non-equivalent control group design was chosen to augment the case analysis. Figure 3.18 shows the form of the quasi-experiment. The non-equivalent control group design controls the following obstacles to internal validity: history, maturation, testing, and instrumentation (Campbell and Stanley, 1963:48,50). Table 7. Validity of Quasi-Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubarak, Misbah; Ross, Robert B.
This technical report describes the experiments performed to validate the MPI performance measurements reported by the CODES dragonfly network simulation with the Theta Cray XC system at the Argonne Leadership Computing Facility (ALCF).
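The report's experiments are not detailed in this record; one canonical measurement that such a validation compares between machine and simulator is a ping-pong latency microbenchmark, sketched here with mpi4py (an assumption, not the report's code).

```python
# Run under MPI, e.g.: mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1024, dtype=np.byte)  # 1 KiB message
reps = 1000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
comm.Barrier()

if rank == 0:
    latency = (MPI.Wtime() - t0) / (2 * reps)  # one-way, averaged
    print(f"one-way latency ~ {latency * 1e6:.2f} us")
```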
Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation
NASA Technical Reports Server (NTRS)
Ross, James C.
2016-01-01
Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation, and as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper will present various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.
Stefanidis, Dimitrios; Hope, William W; Scott, Daniel J
2011-07-01
The value of robotic assistance for intracorporeal suturing is not well defined. We compared robotic suturing with laparoscopic suturing on the FLS model with a large cohort of surgeons. Attendees (n=117) at the SAGES 2006 Learning Center robotic station placed intracorporeal sutures on the FLS box-trainer model using conventional laparoscopic instruments and the da Vinci® robot. Participant performance was recorded using a validated objective scoring system, and a questionnaire regarding demographics, task workload, and suturing modality preference was completed. Construct validity for both tasks was assessed by comparing the performance scores of subjects with various levels of experience. A validated questionnaire was used for workload measurement. Of the participants, 84% had prior laparoscopic and 10% prior robotic suturing experience. Within the allotted time, 83% of participants completed the suturing task laparoscopically and 72% with the robot. Construct validity was demonstrated for both simulated tasks according to the participants' advanced laparoscopic experience, laparoscopic suturing experience, and self-reported laparoscopic suturing ability (p<0.001 for all) and according to prior robotic experience, robotic suturing experience, and self-reported robotic suturing ability (p<0.001 for all), respectively. While participants achieved higher suturing scores with standard laparoscopy compared with the robot (84±75 vs. 56±63, respectively; p<0.001), they found the laparoscopic task more physically demanding (NASA score 13±5 vs. 10±5, respectively; p<0.001) and favored the robot as their method of choice for intracorporeal suturing (62 vs. 38%, respectively; p<0.01). Construct validity was demonstrated for robotic suturing on the FLS model. Suturing scores were higher using standard laparoscopy likely as a result of the participants' greater experience with laparoscopic suturing versus robotic suturing. Robotic assistance decreases the physical demand of intracorporeal suturing compared with conventional laparoscopy and, in this study, was the preferred suturing method by most surgeons. Curricula for robotic suturing training need to be developed.
Benchmarking the ATLAS software through the Kit Validation engine
NASA Astrophysics Data System (ADS)
De Salvo, Alessandro; Brasolin, Franco
2010-04-01
The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.
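Kit Validation's internals are not shown here; as a generic illustration of the kind of per-workload measurement described (wall time, CPU time, peak memory), a minimal Unix-only harness might look like the following.

```python
import resource
import time

def benchmark(fn, *args):
    """Measure wall time, CPU time and peak resident memory for one
    workload (ru_maxrss is in kB on Linux, bytes on macOS)."""
    t0, c0 = time.perf_counter(), time.process_time()
    fn(*args)
    wall = time.perf_counter() - t0
    cpu = time.process_time() - c0
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return wall, cpu, peak

# Stand-in workload; a real harness would run generation, simulation,
# digitization or reconstruction jobs instead.
wall, cpu, peak = benchmark(sum, range(10_000_000))
print(f"wall={wall:.2f}s cpu={cpu:.2f}s peak_rss={peak}")
```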
An Experiment on the Limits of Quantum Electro-dynamics
DOE R&D Accomplishments Database
Barber, W. C.; Richter, B.; Panofsky, W. K. H.; O'Neill, G. K.; Gittelman, B.
1959-06-01
The limitations of previously performed or suggested electrodynamic cutoff experiments are reviewed, and an electron-electron scattering experiment to be performed with storage rings to investigate further the limits of the validity of quantum electrodynamics is described. The foreseen experimental problems are discussed, and the results of the associated calculations are given. The parameters and status of the equipment are summarized. (D.C.W.)
Development of self and peer performance assessment on iodometric titration experiment
NASA Astrophysics Data System (ADS)
Nahadi; Siswaningsih, W.; Kusumaningtyas, H.
2018-05-01
This study aims to describe the process of developing a reliable and valid assessment to measure students' performance on iodometric titration, and the effect of self and peer assessment on students' performance. The self and peer instrument provides valuable feedback for improving student performance. The developed assessment contains a rubric and task for facilitating self and peer assessment. The participants are 24 second-grade students at a vocational high school in Bandung. The participants were divided into two groups. The first 12 students were involved in the validity test of the developed assessment, while the remaining 12 students participated in the reliability test. The content validity was evaluated based on expert judgment. The test of content validity based on expert judgment shows that the developed performance assessment instrument is valid on each task, with reliability classified as very good. Analysis of the impact of the self and peer assessment implementation showed that the peer instrument supported the self assessment.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
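As an illustration of the ingredients only (not the paper's implementation), the sketch below pairs the closed-form cross entropy between two Gaussians with a simple simulated annealing search over a scalar design variable; how the two distributions depend on the input is invented.

```python
import math
import random

def gaussian_cross_entropy(mu_p, s_p, mu_q, s_q):
    """Closed-form cross entropy H(p, q) = E_p[-log q] for two normals."""
    return (0.5 * math.log(2 * math.pi * s_q ** 2)
            + (s_p ** 2 + (mu_p - mu_q) ** 2) / (2 * s_q ** 2))

def utility(x):
    # Hypothetical prediction/observation distributions at input x.
    mu_model, s_model = math.sin(x), 0.2
    mu_obs, s_obs = math.sin(x) + 0.3 * x, 0.3
    return gaussian_cross_entropy(mu_model, s_model, mu_obs, s_obs)

# Simulated annealing over x in [0, 3], maximizing the utility.
random.seed(6)
x, T = 1.5, 1.0
while T > 1e-3:
    cand = min(3.0, max(0.0, x + random.gauss(0, 0.3)))
    delta = utility(cand) - utility(x)
    if delta > 0 or random.random() < math.exp(delta / T):
        x = cand
    T *= 0.995
print(f"selected input x = {x:.3f}, utility = {utility(x):.3f}")
```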
Fundamental arthroscopic skill differentiation with virtual reality simulation.
Rose, Kelsey; Pedowitz, Robert
2015-02-01
The purpose of this study was to investigate the use and validity of virtual reality modules as part of the educational approach to mastering arthroscopy in a safe environment by assessing the ability to distinguish between experience levels. Additionally, the study aimed to evaluate whether experts have greater ambidexterity than do novices. Three virtual reality modules (Swemac/Augmented Reality Systems, Linkoping, Sweden) were created to test fundamental arthroscopic skills. Thirty participants (10 experts consisting of faculty, 10 intermediate participants consisting of orthopaedic residents, and 10 novices consisting of medical students) performed each exercise. Steady and Telescope was designed to train centering and image stability. Steady and Probe was designed to train basic triangulation. Track a Moving Target was designed to train coordinated motions of arthroscope and probe. Metrics reflecting speed, accuracy, and efficiency of motion were used to measure construct validity. Steady and Probe and Track a Moving Target both exhibited construct validity, with better performance by experts and intermediate participants than by novices (P < .05), whereas Steady and Telescope did not show validity. There was an overall trend toward better ambidexterity as a function of greater surgical experience, with experts consistently more proficient than novices throughout all 3 modules. This study represents a new way to assess basic arthroscopy skills using virtual reality modules developed through task deconstruction. Participants with the most arthroscopic experience performed better and were more consistent than novices on all 3 virtual reality modules. Greater arthroscopic experience correlates with more symmetry of ambidextrous performance. However, further adjustment of the modules may better simulate fundamental arthroscopic skills and discriminate between experience levels. Arthroscopy training is a critical element of orthopaedic surgery resident training. Developing techniques to safely and effectively train these skills is critical for patient safety and resident education. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
Fundamentals of endoscopic surgery: creation and validation of the hands-on test.
Vassiliou, Melina C; Dunkin, Brian J; Fried, Gerald M; Mellinger, John D; Trus, Thadeus; Kaneva, Pepa; Lyons, Calvin; Korndorffer, James R; Ujiki, Michael; Velanovich, Vic; Kochman, Michael L; Tsuda, Shawn; Martinez, Jose; Scott, Daniel J; Korus, Gary; Park, Adrian; Marks, Jeffrey M
2014-03-01
The Fundamentals of Endoscopic Surgery™ (FES) program consists of online materials and didactic and skills-based tests. All components were designed to measure the skills and knowledge required to perform safe flexible endoscopy. The purpose of this multicenter study was to evaluate the reliability and validity of the hands-on component of the FES examination, and to establish the pass score. Expert endoscopists identified the critical skill set required for flexible endoscopy. These skills were then modeled in a virtual reality simulator (GI Mentor™ II, Simbionix™ Ltd., Airport City, Israel) to create five tasks and metrics. Scores were designed to measure both speed and precision. Validity evidence was assessed by correlating performance with self-reported endoscopic experience (surgeons and gastroenterologists [GIs]). Internal consistency of each test task was assessed using Cronbach's alpha. Test-retest reliability was determined by having the same participant perform the test a second time and comparing the scores. Passing scores were determined by a contrasting-groups methodology and use of receiver operating characteristic curves. A total of 160 participants (17 % GIs) performed the simulator test. Scores on the five tasks showed good internal consistency reliability, and all had significant correlations with endoscopic experience. Total FES scores correlated 0.73 with participants' level of endoscopic experience, providing evidence of their validity, and their internal consistency reliability (Cronbach's alpha) was 0.82. Test-retest reliability was assessed in 11 participants, and the intraclass correlation was 0.85. The passing score was determined and is estimated to have a sensitivity (true positive rate) of 0.81 and a 1-specificity (false positive rate) of 0.21. The FES hands-on skills test examines the basic procedural components required to perform safe flexible endoscopy. It meets rigorous standards of reliability and validity required for high-stakes examinations and, together with the knowledge component, may help contribute to the definition and determination of competence in endoscopy.
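The contrasting-groups step lends itself to a short sketch; choosing the cutoff that maximizes Youden's J is one common ROC-based rule and is an assumption here, as are the score distributions.

```python
import numpy as np

def pass_score_from_roc(competent, non_competent):
    """Sweep candidate cutoffs and keep the one maximizing
    sensitivity - false positive rate (Youden's J)."""
    cutoffs = np.unique(np.concatenate([competent, non_competent]))
    best_cut, best_j = None, -1.0
    for c in cutoffs:
        tpr = np.mean(competent >= c)      # sensitivity
        fpr = np.mean(non_competent >= c)  # 1 - specificity
        if tpr - fpr > best_j:
            best_cut, best_j = c, tpr - fpr
    return best_cut

# Hypothetical FES-style scores for experienced and novice endoscopists.
rng = np.random.default_rng(7)
experienced = rng.normal(75, 10, 80)
novices = rng.normal(55, 12, 80)
print(f"pass score ~ {pass_score_from_roc(experienced, novices):.1f}")
```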
A user-targeted synthesis of the VALUE perfect predictor experiment
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell-length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (global climate model predictors): how good is the overall representation of regional climate, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data to eliminate global climate model errors, over the period 1979-2008. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations; or (3) gridded, spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling intercomparison project so far. The results clearly indicate that for several aspects the downscaling skill varies considerably between different methods. For specific purposes, some methods can therefore clearly be excluded.
Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph
2015-05-22
When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e., determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study, based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma, was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally used. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, both qualitative and quantitative information was obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space, rather than a single condition, for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.
Observations on CFD Verification and Validation from the AIAA Drag Prediction Workshops
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Kleb, Bil; Vassberg, John C.
2014-01-01
The authors provide observations from the AIAA Drag Prediction Workshops that have spanned over a decade and from a recent validation experiment at NASA Langley. These workshops provide an assessment of the predictive capability of forces and moments, focused on drag, for transonic transports. It is very difficult to manage the consistency of results in a workshop setting to perform verification and validation at the scientific level, but it may be sufficient to assess them at the level of practice. Observations thus far: 1) due to simplifications in the workshop test cases, wind tunnel data are not necessarily the “correct” results that CFD should match; 2) an average of core CFD data is not necessarily a better estimate of the true solution, as it is merely an average of other solutions and has many coupled sources of variation; 3) outlier solutions should be investigated and understood; and 4) the DPW series does not have the systematic build-up and definition on both the computational and experimental sides that is required for detailed verification and validation. Several observations regarding the importance of the grid, effects of physical modeling, benefits of open forums, and guidance for validation experiments are discussed. The increased variation in results when predicting regions of flow separation, and the increased variation due to interaction effects, e.g., between fuselage and horizontal tail, point out the need for validation data sets for these important flow phenomena. Experiences with a recent validation experiment at NASA Langley are included to provide guidance on validation experiments.
Chellali, A.; Ahn, W.; Sankaranarayanan, G.; Flinn, J. T.; Schwaitzberg, S. D.; Jones, D.B.; De, Suvranu; Cao, C.G.L.
2014-01-01
Introduction The Fundamentals of Laparoscopic Surgery (FLS) trainer is currently the standard for training and evaluating basic laparoscopic skills. However, its manual scoring system is time-consuming and subjective. The Virtual Basic Laparoscopic Skill Trainer (VBLaST©) is the virtual version of the FLS trainer, which allows automatic, real-time assessment of skill performance, as well as force feedback. In this study, the VBLaST© pattern cutting (VBLaST-PC©) and ligating loop (VBLaST-LL©) tasks were evaluated as part of a validation study. We hypothesized that performance would be similar on the FLS and VBLaST© trainers, and that subjects with more experience would perform better than those with less experience on both trainers. Methods Fifty-five subjects with varying surgical experience were recruited at the Learning Center during the 2013 SAGES annual meeting and were divided into two groups: experts (PGY 5, surgical fellows and surgical attendings) and novices (PGY 1–4). They were asked to perform the pattern cutting or the ligating loop task on the FLS and the VBLaST© trainers. Their performance scores for each trainer were calculated and compared. Results There were no significant differences between the FLS and VBLaST© scores for either the pattern cutting or the ligating loop task. Experts’ scores were significantly higher than those of novices on both trainers. Conclusion This study showed that the subjects’ performance on the VBLaST© trainer was similar to their FLS performance for both tasks. Both the VBLaST-PC© and the VBLaST-LL© tasks permitted discrimination between the novice and expert groups. Though concurrent and discriminant validity have been established, further studies to establish convergent and predictive validity are needed. Once validated as a training system for laparoscopic skills, the system is expected to overcome the current limitations of the FLS trainer. PMID:25159626
A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation
NASA Astrophysics Data System (ADS)
Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.
2008-08-01
This work presents preliminary results of a new type of application in INAA elemental analysis experiments, useful for checking errors that occur during the investigation of unknown samples; it relies on INAA method validation experiments and on the accuracy of neutron data from the literature. The paper comprises two sections. The first presents, in short, the steps of the experimental tests carried out for INAA method validation and for establishing the 'ACTIVA-N' laboratory performance, which is at the same time an illustration of the laboratory's evolution toward achieving performance. Section 2 presents our recent INAA results on CRMs, whose interpretation opens a discussion of the usefulness of a tool for checking possible errors, different from the usual statistical procedures. The questionable aspects and the requirements for developing a practical checking tool are discussed.
Validation of a dye stain assay for vaginally inserted HEC-filled microbicide applicators
Katzen, Lauren L.; Fernández-Romero, José A.; Sarna, Avina; Murugavel, Kailapuri G.; Gawarecki, Daniel; Zydowsky, Thomas M.; Mensch, Barbara S.
2011-01-01
Background The reliability and validity of self-reports of vaginal microbicide use are questionable given the explicit understanding that participants are expected to comply with study protocols. Our objective was to optimize the Population Council's previously validated dye stain assay (DSA) and related procedures, and establish predictive values for the DSA's ability to identify vaginally inserted single-use, low-density polyethylene microbicide applicators filled with hydroxyethylcellulose gel. Methods Applicators, inserted by 252 female sex workers enrolled in a microbicide feasibility study in Southern India, served as positive controls for optimization and validation experiments. Prior to validation, optimal dye concentration and staining time were ascertained. Three validation experiments were conducted to determine sensitivity, specificity, negative predictive values and positive predictive values. Results The dye concentration of 0.05% (w/v) FD&C Blue No. 1 Granular Food Dye and staining time of five seconds were determined to be optimal and were used for the three validation experiments. There were a total of 1,848 possible applicator readings across validation experiments; 1,703 (92.2%) applicator readings were correct. On average, the DSA performed with 90.6% sensitivity, 93.9% specificity, and had a negative predictive value of 93.8% and a positive predictive value of 91.0%. No statistically significant differences between experiments were noted. Conclusions The DSA was optimized and successfully validated for use with single-use, low-density polyethylene applicators filled with hydroxyethylcellulose (HEC) gel. We recommend including the DSA in future microbicide trials involving vaginal gels in order to identify participants who have low adherence to dosing regimens. In doing so, we can develop strategies to improve adherence as well as investigate the association between product use and efficacy. PMID:21992983
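The reported metrics follow directly from a 2x2 table of assay readings against true insertion status. A minimal Python sketch with hypothetical counts (the study's raw table is not reproduced here):

def dsa_metrics(tp, fn, tn, fp):
    # tp/fn: inserted applicators read correctly/incorrectly
    # tn/fp: non-inserted applicators read correctly/incorrectly
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

print(dsa_metrics(tp=470, fn=49, tn=462, fp=30))  # illustrative counts only

Note that PPV and NPV, unlike sensitivity and specificity, depend on the mix of inserted and non-inserted applicators in the validation design.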
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause suboptimal view of the operating field, thereby increasing risk for errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. The operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was specifically developed for these key user groups. This multicenter prospective cohort study assesses face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results obtained show uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. Experts and intermediate experience groups performed significantly better in task time and camera stability during three repetitions, compared to the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement of proficiency in time and camera stability for all groups during three repetitions (P < .007). The results of this study show face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery.
Construct validity of the ovine model in endoscopic sinus surgery training.
Awad, Zaid; Taghi, Ali; Sethukumar, Priya; Tolley, Neil S
2015-03-01
To demonstrate construct validity of the ovine model as a tool for training in endoscopic sinus surgery (ESS). Prospective, cross-sectional evaluation study. Over 18 consecutive months, trainees and experts were evaluated in their ability to perform a range of tasks (based on previous face validation and descriptive studies conducted by the same group) relating to ESS on the sheep-head model. Anonymized randomized video recordings of the above were assessed by two independent and blinded assessors. A validated assessment tool utilizing a five-point Likert scale was employed. Construct validity was calculated by comparing scores across training levels and experts using mean and interquartile range of global and task-specific scores. Subgroup analysis of the intermediate group ascertained previous experience. Nonparametric descriptive statistics were used, and analysis was carried out using SPSS version 21 (IBM, Armonk, NY). Reliability of the assessment tool was confirmed. The model discriminated well between different levels of expertise in global and task-specific scores. A positive correlation was noted between year in training and both global and task-specific scores (P < .001). Experience of the intermediate group was variable, and the number of ESS procedures performed under supervision had the highest impact on performance. This study describes an alternative model for ESS training and assessment. It is also the first to demonstrate construct validity of the sheep-head model for ESS training.
DSMC Simulations of Hypersonic Flows and Comparison With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.
2004-01-01
This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validation activities for a relatively new DSMC code called DS2V that Bird (the second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and identify the range of measurement variability of the proposed validation data when benchmarked against the current computations. For the test cases with significant vibrational nonequilibrium, the effect of vibrational energy surface accommodation on heating and other quantities is demonstrated.
Real-time remote scientific model validation
NASA Technical Reports Server (NTRS)
Frainier, Richard; Groleau, Nicolas
1994-01-01
This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchat, Thomas K.; Jernigan, Dann A.
This report outlines a set of experiments and test data that provide radiation intensity data for the validation of models of the radiative transfer equation. The experiments were performed with lightly sooting liquid hydrocarbon fuels that yielded fully turbulent fires (2 m diameter). In addition, supplemental measurements of air flow and temperature, fuel temperature and burn rate, and flame surface emissive power, wall heat, and flame height and width provide a complete set of boundary condition data needed for validation of models used in fire simulations.
Reconceptualising the external validity of discrete choice experiments.
Lancsar, Emily; Swait, Joffre
2014-10-01
External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenburg, C.M.
2011-06-01
The need for risk-driven field experiments for CO₂ geologic storage processes to complement ongoing pilot-scale demonstrations is discussed. These risk-driven field experiments would be aimed at understanding the circumstances under which things can go wrong with a CO₂ capture and storage (CCS) project and cause it to fail, as distinguished from accomplishing this end using demonstration- and industrial-scale sites. Such risk-driven tests would complement risk-assessment efforts that have already been carried out by providing opportunities to validate risk models. In addition to experimenting with high-risk scenarios, these controlled field experiments could help validate monitoring approaches to improve performance assessment and guide development of mitigation strategies.
Felipe-Sesé, Luis; López-Alba, Elías; Hannemann, Benedikt; Schmeer, Sebastian; Diaz, Francisco A
2017-06-28
A quasistatic indentation numerical analysis of a round-section specimen made of soft material has been performed and validated with a full-field experimental technique, i.e., Digital Image Correlation 3D. The contact experiment consisted of loading a 25 mm diameter rubber cylinder up to a 5 mm indentation and then unloading. Experimental strain fields measured at the surface of the specimen during the experiment were compared with those obtained from two numerical analyses employing two different hyperelastic material models. The comparison was performed using a new Image Decomposition methodology that makes possible a direct comparison of full-field data independently of their scale or orientation. Numerical results show a good level of agreement with those measured during the experiments. However, since Image Decomposition allows the differences to be quantified, it was observed that one of the adopted material models yields smaller differences with respect to the experimental results.
Improvements in the simulation code of the SOX experiment
NASA Astrophysics Data System (ADS)
Caminata, A.; Agostini, M.; Altenmüeller, K.; Appel, S.; Atroshchenko, V.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; Cribier, M.; D'Angelo, D.; Davini, S.; Derbin, A.; Di Noto, L.; Drachnev, I.; Durero, M.; Etenko, A.; Farinon, S.; Fischer, V.; Fomenko, K.; Franco, D.; Gabriele, F.; Gaffiot, J.; Galbiati, C.; Gschwender, M.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, Th.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jonquères, N.; Jany, A.; Jedrzejczak, K.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kornoukhov, V.; Kryn, D.; Lachenmaier, T.; Lasserre, T.; Laubenstein, M.; Lehnert, B.; Link, J.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Manuzio, G.; Marcocci, S.; Maricic, J.; Mention, G.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Musenich, R.; Neumair, B.; Oberauer, L.; Obolensky, M.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Scola, L.; Semenov, D.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Veyssiére, C.; Vishneva, A.; Vivier, M.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Winter, J.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2017-09-01
The aim of the SOX experiment is to test the hypothesis of the existence of light sterile neutrinos through a short-baseline experiment. Electron antineutrinos will be produced by a high-activity source and detected in the Borexino experiment. Both an oscillometry approach and a conventional disappearance analysis will be performed and, if combined, SOX will be able to investigate most of the anomaly region at 95% c.l. This paper focuses on the improvements made to the simulation code and on the techniques (calibrations) used to validate the results.
U.S. perspective on technology demonstration experiments for adaptive structures
NASA Technical Reports Server (NTRS)
Aswani, Mohan; Wada, Ben K.; Garba, John A.
1991-01-01
Evaluation of design concepts for adaptive structures is being performed in support of several focused research programs. These include programs such as Precision Segmented Reflector (PSR), Control Structure Interaction (CSI), and the Advanced Space Structures Technology Research Experiment (ASTREX). Although not specifically designed for adaptive structure technology validation, relevant experiments can be performed using the Passive and Active Control of Space Structures (PACOSS) testbed, the Space Integrated Controls Experiment (SPICE), the CSI Evolutionary Model (CEM), and the Dynamic Scale Model Test (DSMT) Hybrid Scale. In addition to the ground test experiments, several space flight experiments have been planned, including a reduced gravity experiment aboard the KC-135 aircraft, shuttle middeck experiments, and the Inexpensive Flight Experiment (INFLEX).
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants had more error for invalid compared to valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness.
Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J
2018-05-01
Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other set on the effects of multiple compounds, i.e., the three neonicotinoids imidacloprid, thiacloprid and thiamethoxam, on the survival of a single species, the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of the five tested species in the multiple-species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival should ideally consider calibration data from both acute and chronic tests.
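The GUTS framework's reduced stochastic-death variant (GUTS-RED-SD) can be sketched in a few lines: scaled damage follows first-order kinetics toward the exposure concentration, and hazard accrues in proportion to damage above a threshold. The Python sketch below uses simple Euler integration with placeholder parameters and a made-up pulsed profile; the study's calibrated values are not reproduced here, and the individual-tolerance variant (GUTS-RED-IT) differs in its death mechanism.

import numpy as np

def guts_red_sd_survival(conc, dt, kd, z, kk, hb=0.0):
    # conc: exposure concentration per time step (time-variable profile)
    # kd: dominant rate constant; z: threshold; kk: killing rate; hb: background hazard
    D, H, surv = 0.0, 0.0, []
    for C in conc:
        D += kd * (C - D) * dt                  # toxicokinetics: scaled damage
        H += (kk * max(0.0, D - z) + hb) * dt   # toxicodynamics: hazard above threshold
        surv.append(np.exp(-H))
    return np.array(surv)

# Made-up pulsed exposure (two 2-day pulses, hourly steps) with placeholder parameters
profile = np.r_[np.full(48, 2.0), np.zeros(48), np.full(48, 2.0)]
print(guts_red_sd_survival(profile, dt=1/24, kd=0.5, z=0.8, kk=0.3)[-1])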
Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs
2016-09-27
No independent cross-validation of the success rate of studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters such as cell line, transfection technique, validation method, and type of control, these have to be validated across a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rates and to compare the methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
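The per-experiment efficacy evaluation can be sketched as a fold-change computation plus a Wilcoxon signed-rank test on paired before/after expression values for the target gene; the expression values in this Python sketch are hypothetical, not drawn from the GEO samples.

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired expression values for one target gene
before = np.array([820.0, 1010.0, 760.0, 940.0, 880.0])
after  = np.array([390.0, 520.0, 300.0, 480.0, 410.0])

fold_change = np.mean(after / before)   # FC below 0.5 indicates strong knockdown
stat, p = wilcoxon(before, after)       # paired test of down-regulation
print(f"FC = {fold_change:.2f}, Wilcoxon p = {p:.3g}")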
Xu, Song; Perez, Manuela; Perrenot, Cyril; Hubert, Nicolas; Hubert, Jacques
2016-08-01
To determine the face, content, construct, and concurrent validity of the Xperience™ Team Trainer (XTT) as an assessment tool for robotic surgical bed-assistance skills. Subjects were recruited during a robotic surgery curriculum. They were divided into three groups: group RA with robotic bed-assistance experience, group LS with laparoscopic surgical experience, and a control group without bed-assistance or laparoscopic experience. The subjects first performed two standard FLS exercises on a laparoscopic simulator for the assessment of basic laparoscopic skills. After that, they performed three virtual reality exercises on the XTT, and then performed similar exercises on physical models on a da Vinci® box trainer. Twenty-eight persons volunteered for and completed the tasks. Most expert subjects agreed on the realism of the XTT and the three exercises, and also on their interest for teamwork and bed-assistant training. Group RA and group LS demonstrated a similar level of basic laparoscopic skills. Both groups performed better than the control group on the XTT exercises (p < 0.05). A performance superiority of group RA over group LS was observed but was not statistically significant. Correlation of performance was determined between the tests on the XTT and on the da Vinci® box trainer. The introduction of the XTT facilitates the training of bedside assistants and emphasizes the importance of teamwork, which may change the paradigm of robotic surgery training in the near future. As an assessment tool for bed-assistance skills, the XTT demonstrates face, content, and concurrent validity. However, these results should be qualified considering the potential limitations of this exploratory study with a relatively small sample size. The training modules remain to be developed, and more complex and discriminative exercises are expected. Other studies will be needed to further determine construct validity in the future.
Development and Validation of a Mathematics Anxiety Scale for Students
ERIC Educational Resources Information Center
Ko, Ho Kyoung; Yi, Hyun Sook
2011-01-01
This study developed and validated a Mathematics Anxiety Scale for Students (MASS) that can be used to measure the level of mathematics anxiety that students experience in school settings and help them overcome anxiety and perform better in mathematics achievement. We conducted a series of preliminary analyses and panel reviews to evaluate quality…
ATS-6 engineering performance report. Volume 2: Orbit and attitude controls
NASA Technical Reports Server (NTRS)
Wales, R. O. (Editor)
1981-01-01
Attitude control is reviewed, encompassing the attitude control subsystem, the spacecraft attitude precision pointing and slewing adaptive control experiment, and the RF interferometer experiment. The spacecraft propulsion system (SPS) is discussed, including the subsystem, SPS design description and validation, orbital operations and performance, in-orbit anomalies and contingency operations, and the cesium bombardment ion engine experiment. Thruster failure due to plugging of the propellant feed passages, a major cause of mission termination, is considered among the critical generic failures on the satellite.
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables representing clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validates whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model is fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher-information-based D-optimality criterion is used, and an algorithm for optimizing it is developed. The method is tested in a simulation comparison based on a sudden cardiac arrest case study with 23,041 patients' records. This DSCVR approach, using the Fisher-information-based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low.
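The Fisher-information D-optimality idea can be sketched as a greedy selection: at each step, add the candidate record whose predictor vector most increases the log-determinant of the logistic-regression information matrix. This Python sketch illustrates the criterion only; it is not the authors' exact DSCVR algorithm, and beta stands in for a working coefficient estimate.

import numpy as np

def greedy_d_optimal(X, beta, n_select):
    # X: predictor matrix of candidate (unvalidated) records
    # beta: working logistic-regression coefficients; n_select: review budget
    p = X @ beta
    w = np.exp(p) / (1 + np.exp(p)) ** 2           # logistic variance weights
    M = np.eye(X.shape[1]) * 1e-6                  # small ridge keeps M invertible
    chosen = []
    for _ in range(n_select):
        idx = [i for i in range(len(X)) if i not in chosen]
        gains = [np.linalg.slogdet(M + w[i] * np.outer(X[i], X[i]))[1] for i in idx]
        best = idx[int(np.argmax(gains))]          # record adding most information
        chosen.append(best)
        M += w[best] * np.outer(X[best], X[best])
    return chosen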
Oliva, Alexis; Monzón, Cecilia; Santoveña, Ana; Fariña, José B; Llabrés, Matías
2016-07-01
An ultra-high-performance liquid chromatography method was developed and validated for the quantitation of triamcinolone acetonide in an injectable ophthalmic hydrogel, in order to determine the contribution of analytical method error to the content uniformity measurement. During the development phase, the design of experiments/design space strategy was used. For this, the free R environment was used as an alternative to commercial software, providing a fast, efficient tool for data analysis. The process capability index was used to find the permitted level of variation for each factor and to define the design space. All these aspects were analyzed and discussed under different experimental conditions by the Monte Carlo simulation method. Second, a pre-study validation procedure was performed in accordance with the International Conference on Harmonization guidelines. The validated method was applied to the determination of uniformity of dosage units, and the sources of variability (inhomogeneity and analytical method error) were analyzed based on the overall uncertainty.
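The Monte Carlo use of a process capability index can be sketched as follows (in Python here, though the study worked in R): assumed factor tolerances are propagated through a placeholder response model and summarized as Cpk against specification limits. The model, tolerances, and limits below are all illustrative assumptions.

import numpy as np

def cpk(samples, lsl, usl):
    # Process capability index from simulated responses
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(1)
flow = rng.normal(0.40, 0.01, 10_000)      # mL/min, assumed tolerance
temp = rng.normal(35.0, 0.5, 10_000)       # deg C, assumed tolerance
# Placeholder response model for retention time (not the validated method's model)
retention = 5.0 - 4.0 * (flow - 0.40) - 0.05 * (temp - 35.0)
print(cpk(retention, lsl=4.8, usl=5.2))

A Cpk above roughly 1.33 is a common benchmark for an operating region to be considered capable, which is how a design space can be delimited from such simulations.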
Klein, Jan; Teber, Dogu; Frede, Tom; Stock, Christian; Hruza, Marcel; Gözen, Ali; Seemann, Othmar; Schulze, Michael; Rassweiler, Jens
2013-03-01
Development and full validation of a laparoscopic training program for stepwise learning of a reproducible application of a standardized laparoscopic anastomosis technique, and its integration into the clinical course. The training of vesicourethral anastomosis (VUA) was divided into six simple standardized steps. To establish the objective criteria, four experienced surgeons performed the stepwise training protocol. Thirty-eight participants with no previous laparoscopic experience were then assessed on their training performance. The times needed to complete each training step and the total training time were recorded. Integration into the clinical course was investigated: the training results and the corresponding steps during laparoscopic radical prostatectomy (LRP) were analyzed, and data analysis of the corresponding operating room (OR) sections of 793 LRPs was performed. On this basis, the validity criteria were determined. In the laboratory section, a significant reduction of OR time for every step was seen in all participants (coordination: 62%; longitudinal incision: 52%; inverted U-shape incision: 43%; plexus: 47%; anastomosis catheter model: 38%; VUA: 38%). The laboratory section required a total time of 29 hours (minimum: 16 hours; maximum: 42 hours). All participants had shorter execution times in the laboratory than under real conditions. The best match was found with the VUA model: to perform an anastomosis under real conditions, 25% more time was needed. By using the training protocol, the performance of the VUA is comparable to that of a surgeon with experience of about 50 laparoscopic VUAs. Data analysis proved content, construct, and prognostic validity. The use of stepwise training approaches enables a surgeon to learn and reproduce complex reconstructive surgical tasks, e.g., the VUA, in a safe environment. The validity of the designed system is given at all levels, and it should be used as a standard in clinical surgical training in laparoscopic reconstructive urology.
Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined from hospital records of MIS procedures performed in the teaching hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessment in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion were recorded for each simulator. None of the motion metrics in the virtual reality simulator showed a correlation with experience or with the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas the virtual reality system was validated only for the time score.
In-Flight Thermal Performance of the Lidar In-Space Technology Experiment
NASA Technical Reports Server (NTRS)
Roettker, William
1995-01-01
The Lidar In-Space Technology Experiment (LITE) was developed at NASA's Langley Research Center to explore the applications of lidar operated from an orbital platform. As a technology demonstration experiment, LITE was developed to gain experience designing and building future operational orbiting lidar systems. Since LITE was the first lidar system to be flown in space, an important objective was to validate instrument design principles in such areas as thermal control, laser performance, instrument alignment and control, and autonomous operations. Thermal and structural analysis models of the instrument were developed during the design process to predict the behavior of the instrument during its mission. In order to validate those mathematical models, extensive engineering data was recorded during all phases of LITE's mission. This in-flight engineering data was compared with preflight predictions and, when required, adjustments to the thermal and structural models were made to more accurately match the instrument's actual behavior. The results of this process for the thermal analysis and design of LITE are presented in this paper.
Probing eukaryotic cell mechanics via mesoscopic simulations
NASA Astrophysics Data System (ADS)
Pivkin, Igor V.; Lykov, Kirill; Nematbakhsh, Yasaman; Shang, Menglin; Lim, Chwee Teck
2017-11-01
We developed a new mesoscopic particle-based eukaryotic cell model that takes into account the cell membrane, cytoskeleton, and nucleus. Breast epithelial cells were used in our studies. To estimate the viscoelastic properties of the cells and to calibrate the computational model, we performed micropipette aspiration experiments. The model was then validated using data from microfluidic experiments. Using the validated model, we probed the contributions of sub-cellular components to whole-cell mechanics in micropipette aspiration and microfluidics experiments. We believe that the new model will allow numerous problems in cell biomechanics to be studied in silico for flows in complex domains, such as capillary networks and microfluidic devices.
Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya
2018-03-08
A literature review of finite element analysis (FEA) studies of dental implants and their model validation processes was performed to establish criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation," and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.
Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake
2017-01-01
This paper presents the performance and evaluation of a number of machine learning classifiers for discriminating between vegetation physiognomic classes using satellite-based time series of surface reflectance data. Six vegetation physiognomic classes were dealt with in the research: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich-feature data were prepared from the satellite time series for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments, comprising a number of supervised classifiers with different model parameters, was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, accuracy metrics did not vary much across experiments. Accuracy metrics were found to be very sensitive to input features and the size of the ground truth data. The results obtained in the research are expected to be useful for improving vegetation physiognomic mapping in Japan.
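The evaluation protocol (a supervised classifier assessed by 10-fold cross-validation and scored by accuracy or kappa) can be sketched with scikit-learn; the Random Forest hyperparameters below are assumptions, not the paper's settings.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.metrics import cohen_kappa_score, make_scorer

def evaluate_classifier(X, y):
    # X: per-sample time-series reflectance features; y: physiognomic class labels
    clf = RandomForestClassifier(n_estimators=500, random_state=0)  # assumed settings
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    kappa = make_scorer(cohen_kappa_score)
    return cross_val_score(clf, X, y, cv=cv, scoring=kappa).mean()

Swapping the classifier object (support vector machine, gradient boosting, etc.) while holding the cross-validation scheme fixed reproduces the kind of classifier comparison the paper reports.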
Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R
2017-07-01
OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.
Training Attentional Control Improves Cognitive and Motor Task Performance.
Ducrocq, Emmanuel; Wilson, Mark; Vine, Sam; Derakshan, Nazanin
2016-10-01
Attentional control is a necessary function for the regulation of goal-directed behavior. In three experiments we investigated whether training inhibitory control using a visual search task could improve task-specific measures of attentional control and performance. In Experiment 1, results revealed that training elicited a near-transfer effect, improving performance on a cognitive (antisaccade) task assessing inhibitory control. In Experiment 2, an initial far-transfer effect of training was observed on an index of attentional control validated for tennis. The principal aim of Experiment 3 was to expand on these findings by assessing objective gaze measures of inhibitory control during the performance of a tennis task. Training improved inhibitory control and performance when pressure was elevated, confirming the mechanisms by which cognitive anxiety impacts performance. These results suggest that attentional control training can improve inhibition and reduce task-specific distractibility, with promise of transfer to more efficient sporting performance in competitive contexts.
Measurement uncertainty analysis techniques applied to PV performance measurements
NASA Astrophysics Data System (ADS)
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis, but uncertainty analysis, a more recent development, gives greater insight into measurement processes and into tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined here as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
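The central computation, combining elemental random and systematic uncertainties into an interval about the measured value, can be sketched as a root-sum-square with a coverage factor; the component values and k = 2 in this Python sketch are generic assumptions, not the presentation's worked example.

import numpy as np

def expanded_uncertainty(random_components, systematic_components, k=2.0):
    # Root-sum-square combination of elemental standard uncertainties;
    # k = 2 gives roughly a 95% interval under the usual normality assumption.
    u_random = np.sqrt(np.sum(np.square(random_components)))
    u_systematic = np.sqrt(np.sum(np.square(systematic_components)))
    return k * np.hypot(u_random, u_systematic)

# Example: PV module power measurement with assumed elemental uncertainties (%)
print(expanded_uncertainty([0.3, 0.2], [0.5, 0.4, 0.2]))

The measured value plus or minus this expanded uncertainty is the interval within which the true value is believed to lie, which is the quantity the pre-test and post-test analyses compare against the stated objectives.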
The Objectives of NASA's Living with a Star Space Environment Testbed
NASA Technical Reports Server (NTRS)
Barth, Janet L.; LaBel, Kenneth A.; Brewer, Dana; Kauffman, Billy; Howard, Regan; Griffin, Geoff; Day, John H. (Technical Monitor)
2001-01-01
NASA is planning to fly a series of Space Environment Testbeds (SET) as part of the Living With A Star (LWS) Program. The goal of the testbeds is to improve and develop capabilities to mitigate and/or accommodate the effects of solar variability in spacecraft and avionics design and operation. This will be accomplished by performing technology validation in space to enable routine operations, characterizing technology performance in space, and improving and developing models, guidelines, and databases. The anticipated result of the LWS/SET program is improved spacecraft performance, design, and operation for survival of the radiation, spacecraft charging, meteoroid, orbital debris, and thermosphere/ionosphere environments. The program calls for a series of NASA Research Announcements (NRAs) to be issued to solicit flight validation experiments, improvements in environment effects models and guidelines, and collateral environment measurements. The selected flight experiments may fly on the SET experiment carriers and on flights of opportunity on other commercial and technology missions. This paper presents the status of the project so far, including a description of the types of experiments that are intended to fly on SET-1 and a description of the SET-1 carrier parameters.
Results From Phase-1 and Phase-2 GOLD Experiments
NASA Technical Reports Server (NTRS)
Wilson, K.; Jeganathan, M.; Lesh, J. R.; James, J.; Xu, G.
1997-01-01
The Ground/Orbiter Lasercomm Demonstration conducted between the Japanese Engineering Test Satellite (ETS-VI) and the ground station at JPL's Table Mountain Facility, Wrightwood, California, was the first ground-to-space two-way optical communications experiment. Experiment objectives included validating the performance predictions of the optical link. Atmospheric attenuation and seeing measurements were made during the experiment, and the data were analyzed. Downlink telemetry data recovered over the course of the experiment provided information on the in-orbit performance of the ETS-VI's laser communications equipment. Bit-error rates as low as 10^-4 were measured on the uplink and 10^-5 on the downlink. Measured signal powers agreed well with theoretical predictions.
Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests
NASA Technical Reports Server (NTRS)
Dempsey, Paula; Brandon, E. Bruce
2013-01-01
A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.
Anderson, P. S. L.; Rayfield, E. J.
2012-01-01
Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789
Briley, Daniel A.; Domiteaux, Matthew; Tucker-Drob, Elliot M.
2014-01-01
Many achievement-relevant personality measures (APMs) have been developed, but the interrelations among APMs and their associations with the broader personality landscape are not well known. In Study 1, 214 participants were measured on 36 APMs and a measure of the Big Five. Factor analytic results supported the convergent and discriminant validity of five latent dimensions: performance, mastery, self-doubt, effort, and intellectual investment. Conscientiousness, neuroticism, and openness to experience had the most consistent associations with APMs. We constructed a more efficient scale, the Multidimensional Achievement-Relevant Personality Scale (MAPS). In Study 2, we replicated the factor structure and external correlates of the MAPS in a sample of 359 individuals. Finally, we validated the MAPS against four indicators of academic performance and demonstrated incremental validity. PMID:24839374
Construct validity of the LapVR virtual-reality surgical simulator.
Iwata, Naoki; Fujiwara, Michitaka; Kodera, Yasuhiro; Tanaka, Chie; Ohashi, Norifumi; Nakayama, Goro; Koike, Masahiko; Nakao, Akimasa
2011-02-01
Laparoscopic surgery requires fundamental skills peculiar to endoscopic procedures such as eye-hand coordination. Acquisition of such skills prior to performing actual surgery is highly desirable for a favorable outcome. Virtual-reality simulators have been developed for both surgical training and assessment of performance. The aim of the current study is to show construct validity of a novel simulator, LapVR (Immersion Medical, San Jose, CA, USA), for Japanese surgeons and surgical residents. Forty-four subjects were divided into the following three groups according to their experience in laparoscopic surgery: 14 residents (RE) with no experience in laparoscopic surgery, 14 junior surgeons (JR) with little experience, and 16 experienced surgeons (EX). All subjects executed "essential task 1" programmed in the LapVR, which consists of six tasks, resulting in automatic measurement of 100 parameters indicating various aspects of laparoscopic skills. Time required for each task tended to be inversely correlated with experience in laparoscopic surgery. For the peg transfer skill, statistically significant differences were observed between EX and RE in three parameters, including total time and average time taken to complete the procedure and path length for the nondominant hand. For the cutting skill, similar differences were observed between EX and RE in total time, number of unsuccessful cutting attempts, and path length for the nondominant hand. According to the programmed comprehensive evaluation, performance in terms of successful completion of the task and actual experience of the participants in laparoscopic surgery correlated significantly for the peg transfer (P=0.007) and cutting skills (P=0.026). The peg transfer and cutting skills could best distinguish between EX and RE. This study is the first to provide evidence that LapVR has construct validity to discriminate between novice and experienced laparoscopic surgeons.
Moving to Capture Children's Attention: Developing a Methodology for Measuring Visuomotor Attention.
Hill, Liam J B; Coats, Rachel O; Mushtaq, Faisal; Williams, Justin H G; Aucott, Lorna S; Mon-Williams, Mark
2016-01-01
Attention underpins many activities integral to a child's development. However, methodological limitations currently make large-scale assessment of children's attentional skill impractical, costly and lacking in ecological validity. Consequently we developed a measure of 'Visual Motor Attention' (VMA)-a construct defined as the ability to sustain and adapt visuomotor behaviour in response to task-relevant visual information. In a series of experiments, we evaluated the capability of our method to measure attentional processes and their contributions in guiding visuomotor behaviour. Experiment 1 established the method's core features (ability to track stimuli moving on a tablet-computer screen with a hand-held stylus) and demonstrated its sensitivity to principled manipulations in adults' attentional load. Experiment 2 standardised a format suitable for use with children and showed construct validity by capturing developmental changes in executive attention processes. Experiment 3 tested the hypothesis that children with and without coordination difficulties would show qualitatively different response patterns, finding an interaction between the cognitive and motor factors underpinning responses. Experiment 4 identified associations between VMA performance and existing standardised attention assessments and thereby confirmed convergent validity. These results establish a novel approach to measuring childhood attention that can produce meaningful functional assessments that capture how attention operates in an ecologically valid context (i.e. attention's specific contribution to visuomanual action).
Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission
NASA Technical Reports Server (NTRS)
Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve
2009-01-01
In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.
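The store-and-forward machinery exercised by DINET (bundle origination, prioritization, custody transfer, automatic retransmission) can be illustrated with a minimal sketch. Everything below is hypothetical and heavily simplified relative to the actual DTN bundle protocol (RFC 5050) and JPL's ION implementation; it shows only the core idea of holding a bundle until a custody acknowledgment arrives and retransmitting on timeout.

```python
import time

class Bundle:
    """Minimal DTN-style bundle: payload plus custody/retransmission state."""
    def __init__(self, bundle_id, payload, priority=0):
        self.bundle_id = bundle_id
        self.payload = payload
        self.priority = priority
        self.sent_at = None  # time of last transmission attempt

class CustodyNode:
    """Stores bundles until the next hop acknowledges custody of them."""
    RETRANSMIT_AFTER = 30.0  # seconds; hypothetical timeout

    def __init__(self):
        self.pending = {}  # bundle_id -> Bundle awaiting a custody signal

    def originate(self, bundle):
        self.pending[bundle.bundle_id] = bundle

    def link_available(self, send):
        """When a contact opens, (re)send unacknowledged bundles by priority."""
        now = time.monotonic()
        for b in sorted(self.pending.values(), key=lambda b: -b.priority):
            if b.sent_at is None or now - b.sent_at > self.RETRANSMIT_AFTER:
                send(b)
                b.sent_at = now

    def custody_signal(self, bundle_id):
        """Next hop accepted custody: this node may release its copy."""
        self.pending.pop(bundle_id, None)

# Usage sketch: originate a bundle, then transmit when a contact opens.
node = CustodyNode()
node.originate(Bundle("img-001", b"\x00" * 1024, priority=1))
node.link_available(send=lambda b: print("sending", b.bundle_id))
```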
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The Fault Injection-based Automated Testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. A common system interface allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
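As a rough illustration of the kind of software-implemented fault insertion FIAT performs, the sketch below seeds single-bit faults into a copy of a memory image and compares workload behavior against a fault-free baseline. The workload and all names are invented for illustration; FIAT itself is a distributed real-time testbed, not a script.

```python
import random

def flip_bit(memory: bytearray, byte_index: int, bit: int) -> None:
    """Seed a single-bit fault into a memory image."""
    memory[byte_index] ^= 1 << bit

def run_campaign(workload, memory: bytearray, n_faults: int, seed=0):
    """Record fault-free baseline behavior, then behavior with seeded faults."""
    rng = random.Random(seed)
    baseline = workload(bytes(memory))      # fault-free reference run
    masked = []
    for _ in range(n_faults):
        faulty = bytearray(memory)          # fresh copy for each experiment
        flip_bit(faulty, rng.randrange(len(faulty)), rng.randrange(8))
        masked.append(workload(bytes(faulty)) == baseline)
    return baseline, masked                 # True = fault had no visible effect

# Hypothetical workload: a checksum over a data region.
data = bytearray(b"sensor frame 0042")
baseline, outcomes = run_campaign(lambda m: sum(m) % 251, data, n_faults=5)
print(baseline, outcomes)
```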
Panamanian women's experience of vaginal examination in labour: A questionnaire validation.
Bonilla-Escobar, Francisco J; Ortega-Lenis, Delia; Rojas-Mirquez, Johanna C; Ortega-Loubon, Christian
2016-05-01
To validate a tool that allows healthcare providers to obtain accurate information regarding Panamanian women's thoughts and feelings about vaginal examination during labour, and that can be used in other Latin-American countries. Validation study based on a database from a cross-sectional study carried out in two tertiary care hospitals in Panama City, Panama. Women in the immediate postpartum period who had spontaneous labour onset and uncomplicated deliveries were included in the study from April to August 2008. Researchers used a survey designed by Lewin et al. that included 20 questions related to a patient's experience during a vaginal examination. Five constructs (factors) related to a patient's experience of vaginal examination during labour were identified: Approval (Cronbach's alpha 0.72), Perception (0.67), Rejection (0.40), Consent (0.51), and Stress (0.20). The validity of the scale and its constructs, used to obtain information related to vaginal examination during labour, including patients' experiences with examination and healthcare staff performance, was demonstrated. Utilisation of the scale will allow institutions to identify items that need improvement and address these areas in order to promote the best care for patients in labour. Copyright © 2016 Elsevier Ltd. All rights reserved.
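The per-construct reliabilities quoted above are Cronbach's alpha values. For reference, a minimal sketch of the standard computation, on invented data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: 5 respondents x 4 Likert items of one construct.
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 4, 4, 5],
                   [3, 3, 2, 3],
                   [1, 2, 1, 2]])
print(round(cronbach_alpha(scores), 2))
```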
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.
The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.
Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuscamman, Stephanie; Glenn, Lewis; Schebler, Gregory
2011-09-12
This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear-driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high-explosive experiments (1997).
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
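To make the notion of an analytic criticality benchmark concrete, a one-group bare-slab diffusion problem admits a closed-form k-effective against which a code result can be compared. This is an illustrative diffusion-theory example only; the VERIFICATION_KEFF problems themselves have exact transport solutions, but the verification logic, comparing a computed k_eff to a closed-form value, is the same.

```latex
k_{\mathrm{eff}} \;=\; \frac{\nu\Sigma_f}{\Sigma_a + D\,B_g^2},
\qquad
B_g \;=\; \frac{\pi}{a + 2d}
```

Here νΣ_f is the fission production cross section, Σ_a the absorption cross section, D the diffusion coefficient, and B_g the geometric buckling of a slab of thickness a with extrapolation distance d.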
Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeHart, Mark; Baker, Benjamin; Ortensi, Javier
Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are not presently mature nor deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics is sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continues to be collected to attempt to simulate the behavior of experiments and calibration transients, but it will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the current INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application to TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and the restart, operations, instrumentation and experiment development teams needed to interact effectively and achieve successful validation work during restart testing.
NASA Technical Reports Server (NTRS)
Stefanescu, D. M.; Catalina, A. V.; Juretzko, Frank R.; Sen, Subhayu; Curreri, P. A.
2003-01-01
The objectives of the work on Particle Engulfment and Pushing by Solidifying Interfaces (PEP) include: 1) to obtain a fundamental understanding of the physics of particle pushing and engulfment, 2) to develop mathematical models to describe the phenomenon, and 3) to perform critical experiments in the microgravity environment of space to provide benchmark data for model validation. Successful completion of this project will yield vital information relevant to a diverse range of terrestrial applications. With PEP being a long-term research effort, this report focuses on advances in the theoretical treatment of the solid/liquid interface interaction with an approaching particle, experimental validation of some aspects of the developed models, and the experimental design aspects of future experiments to be performed on board the International Space Station.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, C.; Victor, B.
We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
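BD of a probe-array data set is mathematically the SVD of the space-time data matrix, so correlating experimental and simulated spatial modes is a natural way to build scalar agreement metrics. The sketch below shows the idea; the weighted-correlation form is an assumption for illustration, not necessarily one of the paper's three metrics.

```python
import numpy as np

def bd_modes(data: np.ndarray, n_modes: int):
    """Biorthogonal decomposition of a (n_probes x n_times) data matrix.

    BD is equivalent to an SVD: columns of U are spatial "topos",
    rows of Vt are temporal "chronos", s gives the mode weights.
    """
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :n_modes], s[:n_modes], Vt[:n_modes]

def mode_correlation(exp_data, sim_data, n_modes=4):
    """One possible scalar agreement metric (hypothetical form): a
    weight-averaged correlation between experimental and simulated topos."""
    Ue, se, _ = bd_modes(exp_data, n_modes)
    Us, ss, _ = bd_modes(sim_data, n_modes)
    corrs = np.array([abs(np.dot(Ue[:, k], Us[:, k])) for k in range(n_modes)])
    weights = se * ss
    return float(np.sum(weights * corrs) / np.sum(weights))

# Hypothetical probe arrays: 192 probes x 1000 time samples.
rng = np.random.default_rng(0)
exp = rng.standard_normal((192, 1000))
sim = exp + 0.1 * rng.standard_normal((192, 1000))
print(mode_correlation(exp, sim))
```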
The Arthroscopic Surgical Skill Evaluation Tool (ASSET).
Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T
2013-06-01
Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.
Validation Database Based Thermal Analysis of an Advanced RPS Concept
NASA Technical Reports Server (NTRS)
Balint, Tibor S.; Emis, Nickolas D.
2006-01-01
Advanced radioisotope power system (RPS) concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a design built around a single GPHS (general purpose heat source) module, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
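Cohesive zone models of this kind replace the crack-tip singularity with a traction-separation law acting across the predefined crack path. A commonly used bilinear form is sketched below for orientation; the abstract does not state which specific law was used in SIERRA/SM.

```latex
T(\delta) \;=\;
\begin{cases}
K\,\delta, & \delta \le \delta_0 \\[4pt]
\sigma_{\max}\,\dfrac{\delta_f - \delta}{\delta_f - \delta_0}, & \delta_0 < \delta \le \delta_f \\[4pt]
0, & \delta > \delta_f
\end{cases}
\qquad
G_c \;=\; \tfrac{1}{2}\,\sigma_{\max}\,\delta_f
```

Here δ is the crack-face separation, K the initial stiffness, σ_max the cohesive strength, δ_f the separation at full debonding, and G_c the fracture energy (the area under the curve), which is why parameters such as σ_max and G_c are natural targets for the Dakota calibration mentioned above.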
Validating models of target acquisition performance in the dismounted soldier context
NASA Astrophysics Data System (ADS)
Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.
2018-04-01
The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments are required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: the Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
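For orientation, both model families convert a resolvability measure into a task probability through an empirical target transfer probability function; V50 is the value of the metric at which the task is performed correctly 50% of the time. The form commonly given for the TTP metric in NVESD documentation is:

```latex
P(V) \;=\; \frac{(V/V_{50})^{E}}{1 + (V/V_{50})^{E}},
\qquad
E \;=\; 1.51 + 0.24\,\frac{V}{V_{50}}
```

Fitting V50, as described above, shifts this curve along the V axis; the slope near threshold is governed by the exponent E, which is consistent with the shallow slopes remaining after the fit.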
ERIC Educational Resources Information Center
Isherwood, Mary; Johnson, Heather; Brundrett, Mark
2007-01-01
In 1998, when "Teachers--meeting the challenge of change" was published by the English Department for Education and Skills (DfES), one of the most fundamental reforms of the teaching profession was initiated--performance management. In 2005, all schools became subject to "Light Touch Validation" of their performance management…
Kelly, Laura; Ziebland, Sue; Jenkinson, Crispin
2015-11-01
Health-related websites have developed to be much more than information sites: they are used to exchange experiences and find support as well as information and advice. This paper documents the development of a tool to compare the potential consequences and experiences a person may encounter when using health-related websites. Questionnaire items were developed following a review of relevant literature and qualitative secondary analysis of interviews relating to experiences of health. Item reduction steps were performed on pilot survey data (n=167). Tests of validity and reliability were subsequently performed (n=170) to determine the psychometric properties of the questionnaire. Two independent item pools entered psychometric testing: (1) items relating to general views of using the internet in relation to health, and (2) items relating to the consequences of using a specific health-related website. Identified sub-scales were found to have high construct validity, internal consistency and test-retest reliability. Analyses confirmed good psychometric properties in the eHIQ-Part 1 (11 items) and the eHIQ-Part 2 (26 items). This tool will facilitate the measurement of the potential consequences of using websites containing different types of material (scientific facts and figures, blogs, experiences, images) across a range of health conditions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Three atmospheric dispersion experiments involving oil fog plumes measured by lidar
NASA Technical Reports Server (NTRS)
Eberhard, W. L.; Mcnice, G. T.; Troxel, S. W.
1986-01-01
The Wave Propagation Lab. participated with the U.S. Environmental Protection Agency in a series of experiments with the goal of developing and validating dispersion models that perform substantially better than models currently available. The lidar systems deployed and the data processing procedures used in these experiments are briefly described. Highlights are presented of conclusions drawn thus far from the lidar data.
Rotational Dynamics with Tracker
ERIC Educational Resources Information Center
Eadkhong, T.; Rajsadorn, R.; Jannual, P.; Danworaphong, S.
2012-01-01
We propose the use of Tracker, freeware for video analysis, to analyse the moment of inertia ("I") of a cylindrical plate. Three experiments are performed to validate the proposed method. The first experiment is dedicated to find the linear coefficient of rotational friction ("b") for our system. By omitting the effect of such friction, we derive…
Improving STEM Program Quality in Out-of-School-Time: Tool Development and Validation
ERIC Educational Resources Information Center
Shah, Ashima Mathur; Wylie, Caroline; Gitomer, Drew; Noam, Gil
2018-01-01
In-school and out-of-school-time (OST) experiences are viewed as complementary in contributing to students' interest, engagement, and performance in science, technology, engineering, and mathematics (STEM). While tools exist to measure quality in general afterschool settings and others to measure structured science classroom experiences, there is a need…
Automatic seed picking for brachytherapy postimplant validation with 3D CT images.
Zhang, Guobin; Sun, Qiyuan; Jiang, Shan; Yang, Zhiyong; Ma, Xiaodong; Jiang, Haisong
2017-11-01
Postimplant validation is an indispensable part of the brachytherapy technique. It provides the necessary feedback to ensure the quality of the operation. The ability to pick implanted seeds relates directly to the accuracy of validation. To address this, an automatic approach is proposed for picking implanted brachytherapy seeds in 3D CT images. In order to pick the seed configuration (location and orientation) efficiently, the approach starts with segmentation of the seeds from CT images using a thresholding filter based on the gray-level histogram. Through filtering and denoising, touching seeds and single seeds are classified. The true novelty of this approach is found in the application of Canny edge detection and an improved concave-points-matching algorithm to separate touching seeds. Through the computation of image moments, the seed configuration can be determined efficiently. Finally, two different experiments were designed to verify the performance of the proposed approach: (1) a physical phantom with 60 model seeds, and (2) patient data with 16 cases. Through assessment of the validated results by a medical physicist, the proposed method exhibited promising results. The phantom experiment demonstrates that the error of seed location and orientation is within ([Formula: see text]) mm and ([Formula: see text])[Formula: see text], respectively. In addition, most seed location and orientation errors were within 0.8 mm and 3.5[Formula: see text] in all cases, respectively. The average processing time for seed picking was 8.7 s per 100 seeds. In this paper, an automatic, efficient and robust approach, performed on CT images, is proposed to determine implanted seed locations as well as orientations in a 3D workspace. Through the experiments with phantom and patient data, this approach successfully exhibits good performance.
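The last step described above, determining seed configuration from image moments, reduces to centroid and principal-axis computations. A 2D numpy sketch on an invented binary mask (the paper works on 3D CT volumes and separates touching seeds before this step):

```python
import numpy as np

def seed_pose(mask: np.ndarray):
    """Location and in-plane orientation of one segmented seed from
    image moments: the centroid gives the location, and the major-axis
    angle follows from the second-order central moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()              # centroid (location)
    mu20 = ((xs - cx) ** 2).mean()             # second-order central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # major-axis angle
    return (cx, cy), np.degrees(theta)

# Hypothetical binary mask with one elongated "seed".
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 20:44] = True                      # 4 x 24 px horizontal seed
print(seed_pose(mask))                         # ~((31.5, 31.5), 0.0 degrees)
```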
A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.
Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H
2016-06-01
Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real-world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a new, highly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high-viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.
Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle
NASA Technical Reports Server (NTRS)
Corpening, Jeremy
2010-01-01
This slide presentation reviews the development, validation and application of the model to the Lunar Landing Vehicle. The model, named Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used to model cryogenic propellant conditions, in this case for the Altair lunar lander. Validation of CPPPO was accomplished via comparison to an existing analytic model (ROCETS), a flight experiment, and ground experiments. The model was used to perform a parametric analysis of pressurant conditions for the Lunar Landing Vehicle and to examine the results of unequal tank pressurization and draining for multiple tank designs.
NASA IN-STEP Cryo System Experiment flight test
NASA Astrophysics Data System (ADS)
Russo, S. C.; Sugimura, R. S.
The Cryo System Experiment (CSE), a NASA In-Space Technology Experiments Program (IN-STEP) flight experiment, was flown on Space Shuttle Discovery (STS 63) in February 1995. The experiment was developed by Hughes Aircraft Company to validate in zero-g space a 65 K cryogenic system for focal planes, optics, instruments or other equipment (gamma-ray spectrometers and infrared and submillimetre imaging instruments) that requires continuous cryogenic cooling. The CSE is funded by the NASA Office of Advanced Concepts and Technology's IN-STEP and managed by the Jet Propulsion Laboratory (JPL). The overall goal of the CSE was to validate and characterize the on-orbit performance of the two thermal management technologies that comprise a hybrid cryogenic system. These thermal management technologies consist of (1) a second-generation long-life, low-vibration, Stirling-cycle 65 K cryocooler that was used to cool a simulated thermal energy storage device (TRP) and (2) a diode oxygen heat pipe thermal switch that enables physical separation between a cryogenic refrigerator and a TRP. All CSE experiment objectives and 100% of the experiment success criteria were achieved. The level of confidence provided by this flight experiment is an important NASA and Department of Defense (DoD) milestone prior to multi-year mission commitment. Presented are generic lessons learned from the system integration of cryocoolers for a flight experiment and the recorded zero-g performance of the Stirling cryocooler and the diode oxygen heat pipe.
TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, J; Park, J; Kim, L
2016-06-15
Purpose: Newly published medical physics practice guideline (MPPG 5.a.) has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles and off-axis conditions to verify the robustness and limitations of a dose calculation algorithm. A comparison between measured and calculated dose was performed based on validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. The disagreement is easily identifiable with the difference curve. Subtle discrepancy has revealed the limitation of the measurement, e.g., a spike at the high dose region and an asymmetrical penumbra observed on the tests with an oblique MLC beam. The excellent results we had (> 98% pass rate on 3%/3mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality beam data and the good understanding of the modeling. The limitation of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.
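The 3%/3mm criterion cited in the results is the standard gamma index of Low et al. (1998), with dose tolerance ΔD = 3% and distance-to-agreement Δd = 3 mm; a measured point r_m passes when γ(r_m) ≤ 1:

```latex
\gamma(\mathbf{r}_m) \;=\; \min_{\mathbf{r}_c}
\sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^{2}}{\Delta d^{2}}
\;+\; \frac{\bigl[D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\bigr]^{2}}{\Delta D^{2}}}
```

Here D_m and D_c are the measured and calculated doses, and the minimum is taken over calculated points r_c.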
VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.
2015-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How good is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.
Kim, Eun-Mi; Kim, Sun-Aee; Lee, Ju-Ry; Burlison, Jonathan D; Oh, Eui Geum
2018-02-13
"Second victims" are defined as healthcare professionals whose wellness is influenced by adverse clinical events. The Second Victim Experience and Support Tool (SVEST) was used to measure the second-victim experience and quality of support resources. Although the reliability and validity of the original SVEST have been validated, those for the Korean tool have not been validated. The aim of the study was to evaluate the psychometric properties of the Korean version of the SVEST. The study included 305 clinical nurses as participants. The SVEST was translated into Korean via back translation. Content validity was assessed by seven experts, and test-retest reliability was evaluated by 30 clinicians. Internal consistency and construct validity were assessed via confirmatory factor analysis. The analyses were performed using SPSS 23.0 and STATA 13.0 software. The content validity index value demonstrated validity; item- and scale-level content validity index values were both 0.95. Test-retest reliability and internal consistency reliability were satisfactory: the intraclass consistent coefficient was 0.71, and Cronbach α values ranged from 0.59 to 0.87. The CFA showed a significantly good fit for an eight-factor structure (χ = 578.21, df = 303, comparative fit index = 0.92, Tucker-Lewis index = 0.90, root mean square error of approximation = 0.05). The K-SVEST demonstrated good psychometric properties and adequate validity and reliability. The results showed that the Korean version of SVEST demonstrated the extent of second victimhood and support resources in Korean healthcare workers and could aid in the development of support programs and evaluation of their effectiveness.
USDA-ARS?s Scientific Manuscript database
An accurate and simple-to-perform new version of a competitive ELISA (cELISA) kit that became commercially available in 2015 for testing of cattle for antibody to Anaplasma marginale was validated for detection of Anaplasma ovis antibody in domestic sheep. True positives and negatives were identifie...
Perceptions vs Reality: A Longitudinal Experiment in Influenced Judgement Performance
2003-03-25
validity were manifested equally between treatment and control groups, thereby lending further validity to the experimental research design. External... Stanley (1975) identify this as a True Experimental Design: Pretest-Posttest Control Group Design. However, due to the longitudinal aspect required to... 1975:43). Nonequivalence will be ruled out as pretest equivalence is shown between treatment and control groups (1975:47). For quasi
NASA Astrophysics Data System (ADS)
Smit, H. G.; Straeter, W.; Helten, M.; Kley, D.
2002-05-01
Up to an altitude of about 20 km, ozone sondes constitute the most important data source with long-term coverage for the derivation of ozone trends with sufficient vertical resolution, particularly in the important altitude region around the tropopause. In this region, and also above it in the lower/middle stratosphere up to 30-35 km altitude, ozone sondes are of crucial importance for validating and evaluating satellite measurements, particularly their long-term stability. Each ozone sounding is made with an individual disposable instrument, which therefore has to be characterized well prior to flight; quality assurance of ozone sonde performance is thus a prerequisite. As part of the quality assurance (QA) plan for ozone sondes in routine use in the Global Atmosphere Watch program of the World Meteorological Organization, the environmental simulation chamber at the Research Centre Juelich (Germany) has been established as the World Calibration Centre for Ozone Sondes. The facility enables control of pressure, temperature and ozone concentration and can simulate the flight conditions of ozone soundings up to an altitude of 35 km, whereby an accurate UV photometer serves as a reference. Within the scope of this QA plan, several JOSIE (Juelich Ozone Sonde Intercomparison Experiment) campaigns to assess the performance of ozone sondes of different types and manufacturers have been conducted at the calibration facility since 1996. We will present an overview of the results obtained from the different JOSIE experiments. The results will be discussed with regard to the use of ozone sondes to validate satellite measurements. Special attention will be paid to the influence of operating procedures on the performance of the sondes and the need for standardization to assure ozone sounding data of sufficient quality for satellite validation.
Assessment of construct validity of a virtual reality laparoscopy simulator.
Rosenthal, Rachel; Gantert, Walter A; Hamel, Christian; Hahnloser, Dieter; Metzger, Juerg; Kocher, Thomas; Vogelbach, Peter; Scheidegger, Daniel; Oertli, Daniel; Clavien, Pierre-Alain
2007-08-01
The aim of this study was to assess whether virtual reality (VR) can discriminate between the skills of novices and intermediate-level laparoscopic surgical trainees (construct validity), and whether the simulator assessment correlates with an expert's evaluation of performance. Three hundred and seven (307) participants of the 19th-22nd Davos International Gastrointestinal Surgery Workshops performed the clip-and-cut task on the Xitact LS 500 VR simulator (Xitact S.A., Morges, Switzerland). According to their previous experience in laparoscopic surgery, participants were assigned to the basic course (BC) or the intermediate course (IC). Objective performance parameters recorded by the simulator were compared to the standardized assessment by the course instructors during laparoscopic pelvitrainer and conventional surgery exercises. IC participants performed significantly better on the VR simulator than BC participants for the task completion time as well as the economy of movement of the right instrument, not the left instrument. Participants with maximum scores in the pelvitrainer cholecystectomy task performed the VR trial significantly faster, compared to those who scored less. In the conventional surgery task, a significant difference between those who scored the maximum and those who scored less was found not only for task completion time, but also for economy of movement of the right instrument. VR simulation provides a valid assessment of psychomotor skills and some basic aspects of spatial skills in laparoscopic surgery. Furthermore, VR allows discrimination between trainees with different levels of experience in laparoscopic surgery establishing construct validity for the Xitact LS 500 clip-and-cut task. Virtual reality may become the gold standard to assess and monitor surgical skills in laparoscopic surgery.
Endogenous protein "barcode" for data validation and normalization in quantitative MS analysis.
Lee, Wooram; Lazar, Iulia M
2014-07-01
Quantitative proteomic experiments with mass spectrometry detection are typically conducted using stable isotope labeling and label-free quantitation approaches. Proteins with housekeeping functions and stable expression levels, such as actin, tubulin, and glyceraldehyde-3-phosphate dehydrogenase, are frequently used as endogenous controls. Recent studies have shown that the expression level of such common housekeeping proteins is, in fact, dependent on various factors such as cell type, cell cycle, or disease status and can change in response to a biochemical stimulation. The interference of such phenomena can, therefore, substantially compromise their use for data validation, alter the interpretation of results, and lead to erroneous conclusions. In this work, we advance the concept of a protein "barcode" for data normalization and validation in quantitative proteomic experiments. The barcode comprises a novel set of proteins that was generated from cell cycle experiments performed with MCF7, an estrogen receptor positive breast cancer cell line, and MCF10A, a nontumorigenic immortalized breast cell line. The protein set was selected from a list of ~3700 proteins identified in different cellular subfractions and cell cycle stages of MCF7/MCF10A cells, based on the stability of spectral count data generated with an LTQ ion trap mass spectrometer. A total of 11 proteins qualified as endogenous standards for the nuclear barcode and 62 for the cytoplasmic barcode.
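A minimal sketch of the stability-based selection idea described above; the coefficient-of-variation statistic and the threshold are assumptions for illustration, as the abstract does not give the exact criterion:

```python
import numpy as np

def select_barcode(spectral_counts: np.ndarray, protein_ids, max_cv=0.2):
    """Select candidate endogenous-standard proteins by spectral-count
    stability across conditions (e.g., cell-cycle stages and subcellular
    fractions). The CV statistic and max_cv threshold are assumed here.

    spectral_counts: (n_proteins x n_conditions) matrix of counts.
    """
    means = spectral_counts.mean(axis=1)
    stds = spectral_counts.std(axis=1, ddof=1)
    cv = np.divide(stds, means, out=np.full_like(stds, np.inf),
                   where=means > 0)            # coefficient of variation
    return [pid for pid, c in zip(protein_ids, cv) if c <= max_cv]

# Invented counts: a stable protein passes, a variable one is rejected.
counts = np.array([[120.0, 118.0, 125.0, 121.0],
                   [40.0, 95.0, 10.0, 60.0]])
print(select_barcode(counts, ["PROT_A", "PROT_B"]))  # -> ['PROT_A']
```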
Replicating the Z iron opacity experiments on the NIF
NASA Astrophysics Data System (ADS)
Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Ross, P. W.; Kline, J. L.; Flippo, K. A.; Sherrill, M. E.; Dodd, E. S.; DeVolder, B. G.; Cardenas, T.; Archuleta, T. N.; Craxton, R. S.; Zhang, R.; McKenty, P. W.; Garcia, E. M.; Huffman, E. J.; King, J. A.; Ahmed, M. F.; Emig, J. A.; Ayers, S. L.; Barrios, M. A.; May, M. J.; Schneider, M. B.; Liedahl, D. A.; Wilson, B. G.; Urbatsch, T. J.; Iglesias, C. A.; Bailey, J. E.; Rochau, G. A.
2017-06-01
X-ray opacity is a crucial factor in all radiation-hydrodynamics calculations, yet it is one of the least validated of the material properties in the simulation codes. Recent opacity experiments at the Sandia Z-machine have shown discrepancies between theory and experiment of up to a factor of two, casting doubt on the validity of the opacity models. Therefore, a new experimental opacity platform is being developed on the National Ignition Facility (NIF), not only to verify the Z-machine experimental results but also to extend the experiments to other temperatures and densities. The first experiments will be directed towards measuring the opacity of iron at a temperature of ∼160 eV and an electron density of ∼7 × 10²¹ cm⁻³. Preliminary experiments on NIF have demonstrated the ability to create a sufficiently bright point backlighter using an imploding plastic capsule and also a hohlraum that can heat the opacity sample to the desired conditions. The first of these iron opacity experiments is expected to be performed in 2017.
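For context, transmission-type opacity experiments such as those on Z and on this NIF platform infer the spectral opacity from the attenuation of the backlighter spectrum through the heated sample; in the simplest uniform-slab form:

```latex
T_{\nu} \;=\; e^{-\kappa_{\nu}\,\rho\,L}
\quad\Longrightarrow\quad
\kappa_{\nu} \;=\; -\,\frac{\ln T_{\nu}}{\rho\,L}
```

with T_ν the measured transmission, κ_ν the mass opacity (cm²/g), ρ the sample density, and L the path length.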
Lau, Lily; Basso, Michael R; Estevis, Eduardo; Miller, Ashley; Whiteside, Douglas M; Combs, Dennis; Arentsen, Timothy J
2017-11-01
Performance validity tests (PVTs) and symptom validity tests (SVTs) are often administered during neuropsychological evaluations. Examinees may be coached to avoid detection by measures of response validity. Relatively little research has evaluated whether graduated levels of coaching have differential effects upon PVT and SVT performance. Accordingly, the present experiment evaluated the effect of graduated levels of coaching upon the classification accuracy of commonly used PVTs and SVTs and the currently accepted criterion of failing two or more PVTs or SVTs. Participants simulated symptoms associated with mild traumatic brain injury (TBI). One group was provided superficial information concerning cognitive, emotional, and physical symptoms. Another group was provided detailed information about such symptoms. A third group was provided detailed information about symptoms and guidance how to evade detection by PVTs. These groups were compared to an honest-responding group. Extending prior experiments, stand-alone and embedded PVT measures were administered in addition to SVTs. The three simulator groups were readily identified by PVTs and SVTs, but a meaningful minority of those provided test-taking strategies eluded detection. The Word Memory Test emerged as the most sensitive indicator of simulated mild TBI symptoms. PVTs achieved more sensitive detection of simulated head injury status than SVTs. Individuals coached to modify test-taking performance were marginally more successful in eluding detection by PVTs and SVTs than those coached with respect to TBI symptoms only. When the criterion of failing two or more PVTs or SVTs was applied, only 5% eluded detection.
Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.
NASA Astrophysics Data System (ADS)
Macias, J.; Escalante, C.; Castro, M. J.
2017-12-01
Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models with seismic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid-slide and deformable-slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston, on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Layout, test verification and in-orbit performance of the HELIOS temperature control system
NASA Technical Reports Server (NTRS)
Brungs, W.
1975-01-01
The HELIOS temperature control system is described. The main design features and the impact of interactions between experiment, spacecraft system, and temperature control system requirements on the design are discussed. The major limitations of the thermal design regarding a closer sun approach are given and related to test experience and performance data obtained in orbit. Finally, the validity of the test results achieved with the prototype and flight spacecraft is evaluated by comparison between test data, orbit temperature predictions and flight data.
Neutron streaming studies along JET shielding penetrations
NASA Astrophysics Data System (ADS)
Stamatelatos, Ion E.; Vasilopoulou, Theodora; Batistoni, Paola; Obryk, Barbara; Popovichev, Sergey; Naish, Jonathan
2017-09-01
Neutronic benchmark experiments are carried out at JET aiming to assess the neutronic codes and data used in ITER analysis. Among other activities, experiments are performed in order to validate neutron streaming simulations along long penetrations in the JET shielding configuration. In this work, neutron streaming calculations along the JET personnel entrance maze are presented. Simulations were performed using the MCNP code for Deuterium-Deuterium and Deuterium-Tritium plasma sources. The results of the simulations were compared against experimental data obtained using thermoluminescence detectors and activation foils.
Fernandes, Tânia; Araújo, Susana; Sucena, Ana; Reis, Alexandra; Castro, São Luís
2017-02-01
Reading is a central cognitive domain, but little research has been devoted to standardized tests for adults. We thus examined the psychometric properties of the 1-min version of Teste de Idade de Leitura (Reading Age Test; 1-min TIL), the Portuguese version of the Lobrot L3 test, in three experiments with college students: typical readers in Experiments 1A and 1B, and dyslexic readers and chronological-age controls in Experiment 2. In Experiment 1A, test-retest reliability and convergent validity were evaluated in 185 students. Reliability was >.70, and phonological decoding underpinned 1-min TIL. In Experiment 1B, internal consistency was assessed by presenting two 45-s versions of the test to 19 students, and performance in these versions was significantly associated (r = .78). In Experiment 2, construct validity, criterion validity and clinical utility of 1-min TIL were investigated. A multiple regression analysis corroborated construct validity; both phonological decoding and listening comprehension were reliable predictors of 1-min TIL scores. Logistic regression and receiver operating characteristic analyses revealed the high accuracy of this test in distinguishing dyslexic from typical readers. Therefore, the 1-min TIL, which assesses reading comprehension and potential reading difficulties in college students, has the necessary psychometric properties to become a useful screening instrument in neuropsychological assessment and research. Copyright © 2017 John Wiley & Sons, Ltd.
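The logistic-regression and ROC step described above can be sketched as follows. The score distributions, group sizes and use of scikit-learn are illustrative assumptions, not the study's data or code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic 1-min reading scores: typical vs dyslexic readers (assumed distributions).
typical = rng.normal(35, 6, 100)
dyslexic = rng.normal(22, 6, 40)
X = np.concatenate([typical, dyslexic]).reshape(-1, 1)
y = np.array([0] * 100 + [1] * 40)  # 1 = dyslexic

# Fit a logistic regression and summarise discrimination with the ROC AUC.
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"ROC AUC: {auc:.2f}")
```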
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress in developing simulation tools to predict the behavior of nuclear systems with greater accuracy and in increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') will be a resource for industry, DOE Programs, and academia validation efforts.
2011-01-01
Background The Child Perception Questionnaire (CPQ11-14) is a self-report instrument developed to measure oral-health-related quality of life (OHRQoL) in 11-14-year-olds. Earlier reports confirm that the 16-item short-form version performs adequately, but there is a need to determine the measure's validity and properties in larger and more diverse samples and settings. Aim The objective of this study was to examine the performance of the 16-item short-form impact version of the CPQ11-14 in different communities and cultures with diverse caries experience. Method Cross-sectional epidemiological surveys of child oral health were conducted in two regions of New Zealand, one region in Brunei, and one in Brazil. Children were examined for dental caries (following WHO guidelines), and OHRQoL was measured using the 16-item short-form item-impact version of the CPQ11-14, along with two global questions on OHRQoL. Children in the 20% with the greatest caries experience (DMF score) were categorised as the highest caries quintile. Construct validity was evaluated by comparing the mean scale scores across the categories of caries experience; correlational construct validity was assessed by comparing mean scores and children's global ratings of oral health and well-being. Results There were substantial variations in caries experience among the different communities (from 1.8 in Otago to 4.9 in Northland) and in mean CPQ11-14 scores (from 11.5 in Northland to 16.8 in Brunei). In all samples, those in the most severe caries experience quintile had higher mean CPQ11-14 scores than those who were caries-free (P < 0.05). There were also greater CPQ scores in those with worse self-rated oral health, with the Otago sample presenting the most marked gradient across the response categories for self-rated oral health, from 'Excellent' to 'Fair/Poor' (9.6 to 19.7 respectively). Conclusion The findings suggest that the 16-item short-form item impact version of the CPQ11-14 performs well across diverse cultures and levels of caries experience. Reasons for the differences in mean CPQ scores among the communities are unclear and may reflect subtle socio-cultural differences in subjective oral health among these populations, but elucidating these requires further exploration of the face and content validity of the measure in different populations. PMID:21649928
Nadkarni, Lindsay D; Roskind, Cindy G; Auerbach, Marc A; Calhoun, Aaron W; Adler, Mark D; Kessler, David O
2018-04-01
The aim of this study was to assess the validity of a formative feedback instrument for leaders of simulated resuscitations. This is a prospective validation study with a fully crossed (person × scenario × rater) study design. The Concise Assessment of Leader Management (CALM) instrument was designed by pediatric emergency medicine and graduate medical education experts to be used off the shelf to evaluate and provide formative feedback to resuscitation leaders. Four experts reviewed 16 videos of in situ simulated pediatric resuscitations and scored resuscitation leader performance using the CALM instrument. The videos consisted of 4 pediatric emergency department resuscitation teams each performing in 4 pediatric resuscitation scenarios (cardiac arrest, respiratory arrest, seizure, and sepsis). We report on content and internal structure (reliability) validity of the CALM instrument. Content validity was supported by the instrument development process that involved professional experience, expert consensus, focused literature review, and pilot testing. Internal structure validity (reliability) was supported by the generalizability analysis. The main component that contributed to score variability was the person (33%), meaning that individual leaders performed differently. The rater component had almost zero (0%) contribution to variance, which implies that raters were in agreement and argues for high interrater reliability. These results provide initial evidence to support the validity of the CALM instrument as a reliable assessment instrument that can facilitate formative feedback to leaders of pediatric simulated resuscitations.
Premberg, Åsa; Taft, Charles; Hellström, Anna-Lena; Berg, Marie
2012-01-01
Background A father's experience of the birth of his first child is important not only for his birth-giving partner but also for the father himself, his relationship with the mother and the newborn. No validated questionnaire assessing first-time fathers' experiences during childbirth is currently available. Hence, the aim of this study was to develop and validate an instrument to assess first-time fathers' experiences of childbirth. Method Domains and items were initially derived from interviews with first-time fathers, and supplemented by a literature search and a focus group interview with midwives. The comprehensibility, comprehension and relevance of the items were evaluated by four paternity research experts and a preliminary questionnaire was pilot tested in eight first-time fathers. A revised questionnaire was completed by 200 first-time fathers (response rate = 81%). Exploratory factor analysis using principal component analysis with varimax rotation was performed and multitrait scaling analysis was used to test scaling assumptions. External validity was assessed by means of known-groups analysis. Results Factor analysis yielded four factors comprising 22 items and accounting for 48% of the variance. The domains found were Worry, Information, Emotional support and Acceptance. Multitrait analysis confirmed the convergent and discriminant validity of the domains; however, Cronbach's alpha did not meet conventional reliability standards in two domains. The questionnaire was sensitive to differences between groups of fathers hypothesized to differ on important sociodemographic or clinical variables. Conclusions The questionnaire adequately measures important dimensions of first-time fathers' childbirth experience and may be used to assess aspects of fathers' experiences during childbirth. To obtain the FTFQ and permission for its use, please contact the corresponding author. PMID:22594834
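The factor-analytic step described above (principal components extraction with varimax rotation) can be sketched as below. The 22-item response matrix is synthetic and the factor_analyzer package is an assumed stand-in for the study's statistical software:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumed dependency

rng = np.random.default_rng(1)
items = rng.normal(size=(200, 22))  # placeholder: 200 respondents x 22 items

# Principal-components extraction of four factors with varimax rotation.
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(items)

loadings = fa.loadings_  # item-by-factor loading matrix used to name domains
_, prop_var, cum_var = fa.get_factor_variance()
print("cumulative variance explained:", round(cum_var[-1], 3))
```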
A Comprehensive Validation Approach Using The RAVEN Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J
2015-06-01
The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these gaps. In the following sections, the employed methodology, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
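A minimal sketch of the sampling-plus-metric pattern the abstract describes: propagate an uncertain input through a stand-in system model by Monte Carlo sampling, then score agreement with experimental data using one possible metric. The model, values and the Kolmogorov-Smirnov choice are assumptions for illustration; this is not RAVEN's API:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def system_code(k):
    """Stand-in for an expensive system-code response (e.g., a peak temperature)."""
    return 350.0 + 40.0 * k

# Explore the input space by sampling the uncertain input parameter.
k_samples = rng.normal(1.0, 0.05, 5000)
predicted = system_code(k_samples)

# Stand-in experimental measurements of the same figure of merit.
measured = rng.normal(392.0, 3.0, 30)

# One possible validation metric: distance between predicted and measured distributions.
d, p = stats.ks_2samp(predicted, measured)
print(f"KS distance = {d:.3f}, p = {p:.3g}")
```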
Measuring Student Variables Useful in the Study of Performance in an Online Learning Environment.
ERIC Educational Resources Information Center
Kennedy, Cathleen A.
This paper discusses the measurement of unobservable or latent variables of students and how they contribute to learning in an online environment. It also examines the construct validity of two questionnaires: the College Experience Survey and the Computer Experience Study, which both measure different aspects of student attitudes and behavior…
Schoenbrunner, Anna R; Kelley, Kristen D; Buckstaff, Taylor; McIntyre, Joyce K; Sigler, Alicia; Gosman, Amanda A
2018-05-01
Mexican cleft surgeons provide multidisciplinary comprehensive cleft lip and palate care to children in Mexico. Many Mexican cleft surgeons have extensive experience with foreign, visiting surgeons. The purpose of this study was to characterize Mexican cleft surgeons' domestic and volunteer practice and to learn more about Mexican cleft surgeons' experience with visiting surgeons. A cross-sectional validated e-mail survey tool was sent to Mexican cleft surgeons through 2 Mexican plastic surgery societies and the Asociación Mexicana de Labio y Paladar Hendido y Anomalías Craneofaciales, the national cleft palate society that includes plastic and maxillofacial surgeons who specialize in cleft surgery. We utilized validated survey methodology, including neutral fact-based questions and repeated e-mails to survey nonresponders to maximize validity of statistical data; response rate was 30.6% (n = 81). Mexican cleft surgeons performed, on average, 37.7 primary palate repairs per year with an overall complication rate of 2.5%; 34.6% (n = 28) of respondents had direct experience with patients operated on by visiting surgeons; 53.6% of these respondents performed corrective surgery because of complications from visiting surgeons. Respondents rated 48% of the functional outcomes of visiting surgeons as "acceptable," whereas 43% rated aesthetic outcomes of visiting surgeons as "poor"; 73.3% of respondents were never paid for the corrective surgeries they performed. Thirty-three percent of Mexican cleft surgeons believe that there is a role for educational collaboration with visiting surgeons. Mexican cleft surgeons have a high volume of primary cleft palate repairs in their domestic practice with good outcomes. Visiting surgeons may play an important role in Mexican cleft care through educational collaborations that complement the strengths of Mexican cleft surgeons.
Objective assessment of laparoscopic skills using a virtual reality simulator.
Eriksen, J R; Grantcharov, T
2005-09-01
Virtual reality simulation has great potential as a training and assessment tool for laparoscopic skills. The study was carried out to investigate whether the LapSim system (Surgical Science Ltd., Gothenburg, Sweden) was able to differentiate between subjects with different laparoscopic experience and thus to demonstrate its construct validity. Twenty-four subjects were divided into two groups: experienced (performed > 100 laparoscopic procedures, n = 10) and beginners (performed < 10 laparoscopic procedures, n = 14). Assessment of laparoscopic skills was based on parameters measured by the computer system. Experienced surgeons performed consistently better than the residents. Significant differences in the parameters time and economy of motion existed between the two groups in seven of seven tasks. Regarding error parameters, differences existed in most but not all tasks. LapSim was able to differentiate between subjects with different laparoscopic experience. This indicates that the system measures skills relevant for laparoscopic surgery and can be used in training programs as a valid assessment tool.
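A construct-validity comparison of this kind reduces to a two-group test on a simulator metric. A sketch with synthetic timings; the group means, spreads and the choice of a nonparametric test are assumptions, not the study's analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
experts = rng.normal(55, 8, 10)      # task time in seconds, n = 10 (synthetic)
beginners = rng.normal(80, 12, 14)   # n = 14 (synthetic)

# Nonparametric comparison of the two experience groups.
u, p = stats.mannwhitneyu(experts, beginners)
print(f"U = {u:.1f}, p = {p:.4f}")
```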
Pilot In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Bussink, Frank J. L.; Murdoch, Jennifer L.; Chamberlain, James P.; Chartrand, Ryan; Jones, Kenneth M.
2008-01-01
A Human-In-The-Loop experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) to investigate the viability of the In-Trail Procedure (ITP) concept from a flight crew perspective, by placing participating airline pilots in a simulated oceanic flight environment. The test subject pilots used new onboard avionics equipment that provided improved information about nearby traffic and enabled them, when specific criteria were met, to request an ITP flight level change referencing one or two nearby aircraft that might otherwise block the flight level change. The subject pilots' subjective assessments of ITP validity and acceptability were measured via questionnaires and discussions, and their objective performance in appropriately selecting, requesting, and performing ITP flight level changes was evaluated for each simulated flight scenario. Objective performance and subjective workload assessment data from the experiment's test conditions were analyzed for statistical and operational significance and are reported in the paper. Based on these results, suggestions are made to further improve the ITP.
Eaton, Jennifer L; Mohr, David C; Hodgson, Michael J; McPhaul, Kathleen M
2018-02-01
To describe development and validation of the work-related well-being (WRWB) index. Principal components analysis was performed using Federal Employee Viewpoint Survey (FEVS) data (N = 392,752) to extract variables representing worker well-being constructs. Confirmatory factor analysis was performed to verify factor structure. To validate the WRWB index, we used multiple regression analysis to examine relationships with burnout associated outcomes. Principal Components Analysis identified three positive psychology constructs: "Work Positivity", "Co-worker Relationships", and "Work Mastery". An 11 item index explaining 63.5% of variance was achieved. The structural equation model provided a very good fit to the data. Higher WRWB scores were positively associated with all three employee experience measures examined in regression models. The new WRWB index shows promise as a valid and widely accessible instrument to assess worker well-being.
Rater Expertise in a Second Language Speaking Assessment: The Influence of Training and Experience
ERIC Educational Resources Information Center
Davis, Lawrence Edward
2012-01-01
Speaking performance tests typically employ raters to produce scores; accordingly, variability in raters' scoring decisions has important consequences for test reliability and validity. One such source of variability is the rater's level of expertise in scoring. Therefore, it is important to understand how raters' performance is influenced by…
Cagiltay, Nergiz Ercil; Ozcelik, Erol; Sengul, Gokhan; Berker, Mustafa
2017-11-01
In neurosurgery education, there is a paradigm shift from time-based training to a criterion-based model, for which competency and assessment become very critical. Even though virtual reality simulators provide alternatives to improve education and assessment in neurosurgery programs and allow for several objective assessment measures, there are not many tools for assessing the overall performance of trainees. This study aims to develop and validate a tool for assessing the overall performance of participants in a simulation-based endoneurosurgery training environment. A training program was developed in two levels: endoscopy practice and beginning surgical practice based on four scenarios. Three experiments were then conducted with three corresponding groups of participants (Experiment 1: 45 participants (32 beginners, 13 experienced); Experiment 2: 53 (40 beginners, 13 experienced); Experiment 3: 26 (14 novices, 12 intermediate)). The results were analyzed to identify the common factors among the performance measurements of these experiments, and a factor capable of assessing the overall skill levels of surgical residents was extracted. Afterwards, the proposed measure was tested to estimate the experience levels of the participants. Finally, the level of realism of these educational scenarios was assessed. The factor formed by time, distance, and accuracy on simulated tasks provided an overall performance indicator. The prediction correctness was much higher for beginners than for experienced surgeons in Experiments 1 and 2. When the non-dominant hand is used in a surgical procedure-based scenario, the skill levels of surgeons can be better predicted. The results indicate that the scenarios in Experiments 1 and 2 can be used as an assessment tool for beginners, and scenario 2 in Experiment 3 can be used as an assessment tool for intermediate and novice levels. It can be concluded that establishing the balance between perceived action capacities and skills is critical for better design and development of surgical skill assessment tools.
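One simple way to combine time, distance and accuracy into a single performance score, in the spirit of the factor described above, is to z-score each metric and average them with signs chosen so that larger is better. The data and the equal weighting are placeholders, not the study's extraction method:

```python
import numpy as np

time_s   = np.array([120.0, 95.0, 150.0, 80.0, 110.0])     # task time (synthetic)
dist_mm  = np.array([900.0, 700.0, 1200.0, 650.0, 850.0])  # tool path length (synthetic)
accuracy = np.array([0.70, 0.85, 0.60, 0.90, 0.75])        # task accuracy (synthetic)

def z(x):
    return (x - x.mean()) / x.std()

# Lower time/distance and higher accuracy indicate skill, so flip signs accordingly.
performance = (-z(time_s) - z(dist_mm) + z(accuracy)) / 3.0
print(performance.round(2))
```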
Gawlik, Stephanie; Müller, Mitho; Hoffmann, Lutz; Dienes, Aimée; Reck, Corinna
2015-01-01
A validated questionnaire assessment of fathers' experiences during childbirth is lacking in routine clinical practice. Salmon's Item List is a short, validated method used for the assessment of birth experience in mothers in both English- and German-speaking communities. With little to no validated data available for fathers, this pilot study aimed to assess the applicability of the German version of Salmon's Item List, including a multidimensional birth experience concept, in fathers. This was a longitudinal study; data were collected by questionnaires at a university hospital in Germany. The birth experiences of 102 fathers were assessed four to six weeks post partum using the German version of Salmon's Item List. Construct validity testing with exploratory factor analysis using principal component analysis with varimax rotation was performed to identify the dimensions of childbirth experiences. Internal consistency was also analysed. Factor analysis yielded a four-factor solution comprising 17 items that accounted for 54.5% of the variance. The main domain was 'fulfilment', and the secondary domains were 'emotional distress', 'physical discomfort' and 'emotional adaption'. For fulfilment, Cronbach's α met conventional reliability standards (0.87). Salmon's Item List is an appropriate instrument to assess birth experience in fathers in terms of fulfilment. Larger samples need to be examined in order to prove the stability of the factor structure before this can be extended to routine clinical assessment. A reduced version of Salmon's Item List may be useful as a screening tool for general assessment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Digital Fly-By-Wire Flight Control Validation Experience
NASA Technical Reports Server (NTRS)
Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.
1978-01-01
The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.
Detecting cheaters without thinking: testing the automaticity of the cheater detection module.
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning.
Eriksen, Anne Haahr Mellergaard; Andersen, Rikke Fredslund; Pallisgaard, Niels; Sørensen, Flemming Brandt; Jakobsen, Anders; Hansen, Torben Frøstrup
2016-01-01
MicroRNAs (miRNAs) play important roles in regulating biological processes at the post-transcriptional level. Deregulation of miRNAs has been observed in cancer, and miRNAs are being investigated as potential biomarkers regarding diagnosis, prognosis and prediction in cancer management. Real-time quantitative polymerase chain reaction (RT-qPCR) is commonly used when measuring miRNA expression. Appropriate normalisation of RT-qPCR data is important to ensure reliable results. The aim of the present study was to identify stably expressed miRNAs applicable as normaliser candidates in future studies of miRNA expression in rectal cancer. We performed high-throughput miRNA profiling (OpenArray®) on ten pairs of laser micro-dissected rectal cancer tissue and adjacent stroma. A global mean expression normalisation strategy was applied to identify the most stably expressed miRNAs for subsequent validation. In the first validation experiment, a panel of miRNAs was analysed on 25 pairs of micro-dissected rectal cancer tissue and adjacent stroma. Subsequently, the same miRNAs were analysed in 28 pairs of rectal cancer tissue and normal rectal mucosa. From the miRNA profiling experiment, miR-645, miR-193a-5p, miR-27a and let-7g were identified as stably expressed, both in malignant and stromal tissue. In addition, NormFinder confirmed high expression stability for the four miRNAs. In the RT-qPCR-based validation experiments, no significant difference between tumour and stroma/normal rectal mucosa was detected for the mean of the normaliser candidates miR-27a, miR-193a-5p and let-7g (first validation P = 0.801, second validation P = 0.321). MiR-645 was excluded from the data analysis because it was undetected in 35 of 50 samples (first validation) and in 24 of 56 samples (second validation), respectively. A significant difference in expression level of RNU6B was observed between tumour and adjacent stroma (first validation), and between tumour and normal rectal mucosa (second validation). We recommend the mean expression of miR-27a, miR-193a-5p and let-7g as the normalisation factor when performing miRNA expression analyses by RT-qPCR on rectal cancer tissue.
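Using the mean of several reference miRNAs as a normalisation factor amounts to a multi-reference ΔΔCq calculation. A sketch with placeholder quantification-cycle values; only the three recommended normalisers come from the abstract, everything else is invented for illustration:

```python
import numpy as np

cq = {  # placeholder Cq values for one target miRNA and the three normalisers
    "tumour": {"target": 27.1, "miR-27a": 24.0, "miR-193a-5p": 26.2, "let-7g": 22.9},
    "normal": {"target": 28.4, "miR-27a": 24.2, "miR-193a-5p": 26.0, "let-7g": 23.1},
}

def delta_cq(sample):
    """Target Cq minus the mean Cq of the three reference miRNAs."""
    ref = np.mean([cq[sample][m] for m in ("miR-27a", "miR-193a-5p", "let-7g")])
    return cq[sample]["target"] - ref

# Relative expression of the target in tumour vs normal mucosa (2^-ddCq).
fold_change = 2.0 ** (-(delta_cq("tumour") - delta_cq("normal")))
print(f"tumour vs normal fold change: {fold_change:.2f}")
```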
NASA Astrophysics Data System (ADS)
Catanzarite, Joseph; Burke, Christopher J.; Li, Jie; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division
2016-06-01
The Kepler Mission is developing an Analytic Completeness Model (ACM) to estimate detection completeness contours as a function of exoplanet radius and period for each target star. Accurate completeness contours are necessary for robust estimation of exoplanet occurrence rates. The main components of the ACM for a target star are: detection efficiency as a function of SNR, the window function (WF) and the one-sigma depth function (OSDF) (Ref. Burke et al. 2015). The WF captures the falloff in transit detection probability at long periods that is determined by the observation window (the duration over which the target star has been observed). The OSDF is the transit depth (in parts per million) that yields SNR of unity for the full transit train. It is a function of period, and accounts for the time-varying properties of the noise and for missing or deweighted data. We are performing flux-level transit injection (FLTI) experiments on selected Kepler target stars with the goal of refining and validating the ACM. “Flux-level” injection machinery inserts exoplanet transit signatures directly into the flux time series, as opposed to “pixel-level” injection, which inserts transit signatures into the individual pixels using the pixel response function. See Jie Li's poster: ID #2493668, "Flux-level transit injection experiments with the NASA Pleiades Supercomputer" for details, including performance statistics. Since FLTI is affordable for only a small subset of the Kepler targets, the ACM is designed to apply to most Kepler target stars. We validate this model using “deep” FLTI experiments, with ~500,000 injection realizations on each of a small number of targets, and “shallow” FLTI experiments with ~2000 injection realizations on each of many targets. From the results of these experiments, we identify anomalous targets, model their behavior and refine the ACM accordingly. In this presentation, we discuss progress in validating and refining the ACM, and we compare our detection efficiency curves with those derived from the associated pixel-level transit injection experiments. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
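The structure of such a completeness calculation can be sketched as follows: convert a transit depth to an expected SNR via the one-sigma depth function, map SNR to a detection probability, and multiply by the window function. The logistic efficiency curve, threshold and numbers below are illustrative assumptions, not the Kepler pipeline's values:

```python
import numpy as np

def expected_snr(depth_ppm, osdf_ppm):
    """SNR of the full transit train: depth over the depth that yields SNR = 1 (OSDF)."""
    return depth_ppm / osdf_ppm

def detection_efficiency(snr, threshold=7.1, width=1.0):
    """Assumed smooth ramp around a detection threshold (logistic stand-in)."""
    return 1.0 / (1.0 + np.exp(-(snr - threshold) / width))

def completeness(depth_ppm, osdf_ppm, window_prob):
    """Detection completeness = efficiency(SNR) x window function."""
    return detection_efficiency(expected_snr(depth_ppm, osdf_ppm)) * window_prob

# Example: a 500 ppm transit, an OSDF of 60 ppm at this period, 80% window probability.
print(f"completeness = {completeness(500.0, 60.0, 0.8):.3f}")
```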
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1985-01-01
Three experiments on fault tolerant multiprocessors (FTMP) were begun. They are: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.
Effects of Pump-turbine S-shaped Characteristics on Transient Behaviours: Experimental Investigation
NASA Astrophysics Data System (ADS)
Zeng, Wei; Yang, Jiandong; Hu, Jinhong; Tang, Renbo
2017-05-01
A pumped storage station model was set up and introduced in the previous paper. In the model station, the S-shaped characteristic curves were measured at the load rejection condition with the guide vanes stalling. Load rejection tests in which the guide vanes closed linearly were performed to validate the effect of the S-shaped characteristics on hydraulic transients. Load rejection experiments with different guide vane closing schemes were also performed to determine a suitable scheme considering the S-shaped characteristics. The condition of one pump turbine rejecting its load after another, defined as one-after-another (OAA) load rejection, was tested to examine the possibility of extreme draft tube pressures induced by the S-shaped characteristics.
Targeting BRCAness in Gastric Cancer
2017-10-01
inhibitors. We also generated a modified CRISPR system using dCas9-KRAB-expressing variants of these cells, and validated them for CRISPRi screening. [Figure 2 caption: Validation of CRISPR activity following transduction with sgRNAs targeting CD55 and FACS staining with the anti-CD55 antibody.]
2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation
NASA Technical Reports Server (NTRS)
Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.
2009-01-01
A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. A collaboration in this project includes work by NASA research engineers, whereas CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL type aircraft is focusing on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, C.E.
2001-01-29
Numerous large-scale fracture experiments have been performed over the past thirty years to advance fracture mechanics methodologies applicable to thick-wall pressure vessels. This report first identifies major factors important to nuclear reactor pressure vessel (RPV) integrity under pressurized thermal shock (PTS) conditions. It then covers 20 key experiments that have contributed to identifying fracture behavior of RPVs and to validating applicable assessment methodologies. The experiments are categorized according to four types of specimens: (1) cylindrical specimens, (2) pressurized vessels, (3) large plate specimens, and (4) thick beam specimens. These experiments were performed in laboratories in six different countries. This report serves as a summary of those experiments, and provides a guide to references for detailed information.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saha, Sankalita; Goebel, Kai
2011-01-01
Accelerated aging methodologies for electrolytic capacitors have been designed and accelerated aging experiments have been carried out. The methodology is based on imposing electrical and/or thermal overstresses via electrical power cycling in order to mimic real-world operation behavior. Data are collected in situ and offline in order to periodically characterize the devices' electrical performance as they age. The data generated through these experiments are meant to provide capability for the validation of prognostic algorithms (both model-based and data-driven). Furthermore, the data allow validation of physics-based and empirical degradation models for this type of capacitor. A first set of models and algorithms has been designed and tested on the data.
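Validating an empirical degradation model against the periodic characterisations such experiments produce can be sketched as a curve fit. The exponential capacitance-fade form and the data points are assumptions for illustration, not the study's models or measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder characterisation data from an accelerated-aging run.
t_hours = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
cap_norm = np.array([1.00, 0.97, 0.945, 0.92, 0.90, 0.88])  # capacitance / initial value

def fade(t, c0, k):
    """Assumed empirical model: exponential capacitance fade."""
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(fade, t_hours, cap_norm, p0=(1.0, 1e-3))
print(f"c0 = {c0:.3f}, fade rate k = {k:.2e} per hour")
```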
Virtual reality simulation training in Otolaryngology.
Arora, Asit; Lau, Loretta Y M; Awad, Zaid; Darzi, Ara; Singh, Arvind; Tolley, Neil
2014-01-01
To conduct a systematic review of the validity data for the virtual reality surgical simulator platforms available in Otolaryngology. Ovid and Embase databases were searched on July 13, 2013. Four hundred and nine abstracts were independently reviewed by 2 authors. Thirty-six articles which fulfilled the search criteria were retrieved and viewed in full text. These articles were assessed for quantitative data on at least one aspect of face, content, construct or predictive validity. Papers were stratified by simulator, sub-specialty and further classified by the validation method used. There were 21 articles reporting applications for temporal bone surgery (n = 12), endoscopic sinus surgery (n = 6) and myringotomy (n = 3). Four different simulator platforms were validated for temporal bone surgery and two for each of the other surgical applications. Face/content validation represented the most frequent study type (9/21). Construct validation studies performed on temporal bone and endoscopic sinus surgery simulators showed that performance measures reliably discriminated between different experience levels. Simulation training improved cadaver temporal bone dissection skills and operating room performance in sinus surgery. Several simulator platforms, particularly in temporal bone surgery and endoscopic sinus surgery, are worthy of incorporation into training programmes. Standardised metrics are necessary to guide curriculum development in Otolaryngology. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Validating the BISON fuel performance code to integral LWR experiments
Williamson, R. L.; Gamble, K. A.; Perez, D. M.; ...
2016-03-24
BISON is a modern finite element-based nuclear fuel performance code that has been under development at the Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Our results demonstrate that 1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable, with deviations between predictions and experimental data within ±10% for early life through high burnup fuel and only slightly out of these bounds for power ramp experiments; 2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties; and 3) comparison of rod diameter results indicates a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. The initial rod diameter comparisons were unsatisfactory and led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.
NASA Technical Reports Server (NTRS)
Wales, R. O. (Editor)
1981-01-01
The overall mission and spacecraft systems, testing, and operations are summarized. The mechanical subsystems are reviewed, encompassing mechanical design requirements; separation and deployment mechanisms; design and performance evaluation; and the television camera reflector monitor. Thermal control and contamination are discussed in terms of thermal control subsystems, design validation, subsystems performance, the advanced flight experiment, and the quartz-crystal microbalance contamination monitor.
An FMRI-compatible Symbol Search task.
Liebel, Spencer W; Clark, Uraina S; Xu, Xiaomeng; Riskin-Jones, Hannah H; Hawkshead, Brittany E; Schwarz, Nicolette F; Labbe, Donald; Jerskey, Beth A; Sweet, Lawrence H
2015-03-01
Our objective was to determine whether a Symbol Search paradigm developed for functional magnetic resonance imaging (FMRI) is a reliable and valid measure of cognitive processing speed (CPS) in healthy older adults. As all older adults are expected to experience cognitive declines due to aging, and CPS is one of the domains most affected by age, establishing a reliable and valid measure of CPS that can be administered inside an MR scanner may prove invaluable in future clinical and research settings. We evaluated the reliability and construct validity of a newly developed FMRI Symbol Search task by comparing participants' performance in and outside of the scanner and to the widely used and standardized Symbol Search subtest of the Wechsler Adult Intelligence Scale (WAIS). A brief battery of neuropsychological measures was also administered to assess the convergent and discriminant validity of the FMRI Symbol Search task. The FMRI Symbol Search task demonstrated high test-retest reliability when compared to performance on the same task administered out of the scanner (r=.791; p<.001). The criterion validity of the new task was supported, as it exhibited a strong positive correlation with the WAIS Symbol Search (r=.717; p<.001). Predicted convergent and discriminant validity patterns of the FMRI Symbol Search task were also observed. The FMRI Symbol Search task is a reliable and valid measure of CPS in healthy older adults and exhibits expected sensitivity to the effects of age on CPS performance.
Face validity, construct validity and training benefits of a virtual reality TURP simulator.
Bright, Elizabeth; Vine, Samuel; Wilson, Mark R; Masters, Rich S W; McGrath, John S
2012-01-01
To assess face validity, construct validity and the training benefits of a virtual reality TURP simulator. 11 novices (no TURP experience) and 7 experts (>200 TURPs) completed a virtual reality median lobe prostate resection task on the TURPsim™ (Simbionix USA Corp., Cleveland, OH). Performance indicators (percentage of prostate resected (PR), percentage of capsular resection (CR) and time diathermy loop active without tissue contact (TAWC)) were recorded via the TURPsim™ and compared between novices and experts to assess construct validity. Verbal comments provided by experts following task completion were used to assess face validity. Repeated attempts of the task by the novices were analysed to assess the training benefits of the TURPsim™. Experts resected a significantly greater percentage of prostate per minute (p < 0.01) and had significantly less active diathermy time without tissue contact (p < 0.01) than novices. After practice, novices were able to perform the simulation more effectively, with significant improvement in all measured parameters. Improvement in performance was noted in novices following repetitive training, as evidenced by improved TAWC scores that were not significantly different from the expert group (p = 0.18). This study has established face and construct validity for the TURPsim™. The potential benefit in using this tool to train novices has also been demonstrated. Copyright © 2012 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Laczó, Jan; Markova, Hana; Lobellova, Veronika; Gazova, Ivana; Parizkova, Martina; Cerman, Jiri; Nekovarova, Tereza; Vales, Karel; Klovrzova, Sylva; Harrison, John; Windisch, Manfred; Vlcek, Kamil; Svoboda, Jan; Hort, Jakub; Stuchlik, Ales
2017-02-01
Development of new drugs for treatment of Alzheimer's disease (AD) requires valid paradigms for testing their efficacy and sensitive tests validated in translational research. We present validation of a place-navigation task, a Hidden Goal Task (HGT) based on the Morris water maze (MWM), in comparable animal and human protocols. We used scopolamine to model cognitive dysfunction similar to that seen in AD and donepezil, a symptomatic medication for AD, to assess its potential reversible effect on this scopolamine-induced cognitive dysfunction. We tested the effects of scopolamine and the combination of scopolamine and donepezil on place navigation and compared their effects in human and rat versions of the HGT. Place navigation testing consisted of 4 sessions of HGT performed at baseline, 2, 4, and 8 h after dosing in humans or 1, 2.5, and 5 h in rats. Scopolamine worsened performance in both animals and humans. In the animal experiment, co-administration of donepezil alleviated the negative effect of scopolamine. In the human experiment, subjects co-administered with scopolamine and donepezil performed similarly to subjects on placebo and scopolamine, indicating a partial ameliorative effect of donepezil. In the task based on the MWM, scopolamine impaired place navigation, while co-administration of donepezil alleviated this effect in comparable animal and human protocols. Using scopolamine and donepezil to challenge place navigation testing can be studied concurrently in animals and humans and may be a valid and reliable model for translational research, as well as for preclinical and clinical phases of drug trials.
Methodology and issues of integral experiments selection for nuclear data validation
NASA Astrophysics Data System (ADS)
Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian
2017-09-01
Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications [1]. Benchmarks are often taken from international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results [1]. This work aims at quantifying the importance of benchmarks used in application-dependent cross section validation. The approach is based on the well-known General Linear Least Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
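A minimal GLLS-style sketch of how benchmark weighting factors can arise: each benchmark's value is measured by how much it reduces posterior cross-section uncertainty in the standard update. The sensitivities and covariances below are invented, and the trace-reduction score is only one possible diagnostic, not the Subgroup 39 implementation:

```python
import numpy as np

M = np.diag([0.04, 0.09])          # prior covariance of two cross sections (assumed)
S = np.array([[0.8, 0.1],          # benchmark sensitivities to each cross section (assumed)
              [0.3, 0.6],
              [0.5, 0.5]])
V = np.diag([0.01, 0.02, 0.015])   # experimental + modelling covariance (assumed)

# Standard GLLS posterior using all benchmarks at once.
G = S @ M @ S.T + V
K = M @ S.T @ np.linalg.inv(G)
M_post = M - K @ S @ M
print("posterior uncertainty (trace):", round(np.trace(M_post), 4))

# A simple per-benchmark weight: uncertainty reduction when the benchmark is used alone.
for i in range(S.shape[0]):
    Si = S[i:i + 1]
    Gi = Si @ M @ Si.T + V[i:i + 1, i:i + 1]
    Mi = M - M @ Si.T @ np.linalg.inv(Gi) @ Si @ M
    print(f"benchmark {i}: trace reduction = {np.trace(M) - np.trace(Mi):.4f}")
```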
NASA Technical Reports Server (NTRS)
Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem
2010-01-01
Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, four experiments - Thermal Loop, Dependable Microprocessor, SAILMAST, and UltraFlex - were conducted to advance the maturity of individual technologies from proof of concept to prototype demonstration in a relevant environment, i.e. from a technology readiness level (TRL) of 3 to a level of 6. This paper presents the new technologies and validation approach of the Thermal Loop experiment. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. Details of the thermal loop concept, technical advances, benefits, objectives, level 1 requirements, and performance characteristics are described. Also included in the paper are descriptions of the test articles and mathematical modeling used for the technology validation. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for TRL 4 and TRL 5 validations, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady state and transient behaviors of the MLHP during various validation tests. Capabilities and limitations of the analytical model are also addressed.
On the relation between personality and job performance of airline pilots.
Hormann, H J; Maschke, P
1996-01-01
The validity of a personality questionnaire for the prediction of job success of airline pilots is compared to the validities of a simulator checkflight and of flying experience data. During selection, 274 pilots applying for employment with a European charter airline were examined with a multidimensional personality questionnaire (Temperament Structure Scales; TSS). Additionally, the applicants were graded in a simulator checkflight. On the basis of training records, the pilots were classified as performing at standard or below standard after about 3 years of employment in the hiring company. In a multiple-regression model, this dichotomous criterion for job success can be predicted with 73.8% accuracy through the simulator checkflight and flying experience prior to employment. By adding the personality questionnaire to the regression equation, the number of correct classifications increases to 79.3%. On average, successful pilots score substantially higher on interpersonal scales and lower on emotional scales of the TSS.
Disturbance Reduction Control Design for the ST7 Flight Validation Experiment
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Hsu, O. C.; Markley, F. L.; Houghton, M. B.
2003-01-01
The Space Technology 7 experiment will perform an on-orbit system-level validation of two specific Disturbance Reduction System technologies: a gravitational reference sensor employing a free-floating test mass, and a set of micro-Newton colloidal thrusters. The ST7 Disturbance Reduction System is designed to maintain the spacecraft's position with respect to a free-floating test mass to less than 10 nm/√Hz over the frequency range of 1 to 30 mHz. This paper presents the design and analysis of the coupled, drag-free and attitude control systems that close the loop between the gravitational reference sensor and the micro-Newton thrusters, while incorporating star tracker data at low frequencies. A full 18 degree-of-freedom model, which incorporates rigid-body models of the spacecraft and two test masses, is used to evaluate the effects of actuation and measurement noise and disturbances on the performance of the drag-free system.
Galileo Attitude Determination: Experiences with a Rotating Star Scanner
NASA Technical Reports Server (NTRS)
Merken, L.; Singh, G.
1991-01-01
The Galileo experience with a rotating star scanner is discussed in terms of problems encountered in flight, solutions implemented, and lessons learned. An overview of the Galileo project and the attitude and articulation control subsystem is given and the star scanner hardware and relevant software algorithms are detailed. The star scanner is the sole source of inertial attitude reference for this spacecraft. Problem symptoms observed in flight are discussed in terms of effects on spacecraft performance and safety. Sources of these problems include contributions from flight software idiosyncrasies and inadequate validation of the ground procedures used to identify target stars for use by the autonomous on-board star identification algorithm. Problem fixes (some already implemented and some only proposed) are discussed. A general conclusion is drawn regarding the inherent difficulty of performing simulation tests to validate algorithms which are highly sensitive to external inputs of statistically 'rare' events.
Korzeniowski, Przemyslaw; Brown, Daniel C; Sodergren, Mikael H; Barrow, Alastair; Bello, Fernando
2017-02-01
The goal of this study was to establish face, content, and construct validity of NOViSE, the first force-feedback-enabled virtual reality (VR) simulator for natural orifice transluminal endoscopic surgery (NOTES). Fourteen surgeons and surgical trainees performed 3 simulated hybrid transgastric cholecystectomies using a flexible endoscope on NOViSE. Four of them were classified as "NOTES experts" who had independently performed 10 or more simulated or human NOTES procedures. Seven participants were classified as "Novices" and 3 as "Gastroenterologists" with no or minimal NOTES experience. A standardized 5-point Likert-type scale questionnaire was administered to assess the face and content validity. NOViSE showed good overall face and content validity. In 14 out of 15 statements pertaining to face validity (graphical appearance, endoscope and tissue behavior, overall realism), ≥50% of responses were "agree" or "strongly agree." In terms of content validity, 85.7% of participants agreed or strongly agreed that NOViSE is a useful training tool for NOTES and 71.4% that they would recommend it to others. Construct validity was assessed by comparing a number of performance metrics such as task completion times, path lengths, and applied forces. NOViSE demonstrated early signs of construct validity. Experts were faster and used a shorter endoscopic path length than novices in all but one task. The results indicate that NOViSE authentically recreates a transgastric hybrid cholecystectomy and sets promising foundations for the further development of a VR training curriculum for NOTES without compromising patient safety or requiring expensive animal facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irminger, Philip; Starke, Michael R; Dimitrovski, Aleksandar D
2014-01-01
Power system equipment manufacturers and researchers continue to experiment with novel overhead electric conductor designs that support better conductor performance and address congestion issues. To address the technology gap in testing these novel designs, Oak Ridge National Laboratory constructed the Powerline Conductor Accelerated Testing (PCAT) facility to evaluate the performance of novel overhead conductors in an accelerated fashion in a field environment. Additionally, PCAT has the capability to test advanced sensors and measurement methods for assessing overhead conductor performance and condition. Equipped with extensive measurement and monitoring devices, PCAT provides a platform to improve and validate conductor computer models and to assess the performance of novel conductors. The PCAT facility and its testing capabilities are described in this paper.
The Impact of Preceptor and Student Learning Styles on Experiential Performance Measures
Cox, Craig D.; Seifert, Charles F.
2012-01-01
Objectives. To identify preceptors’ and students’ learning styles to determine how these impact students’ performance on pharmacy practice experience assessments. Methods. Students and preceptors were asked to complete a validated Pharmacist’s Inventory of Learning Styles (PILS) questionnaire to identify dominant and secondary learning styles. The significance of “matched” and “unmatched” learning styles between students and preceptors was evaluated based on performance on both subjective and objective practice experience assessments. Results. Sixty-one percent of 67 preceptors and 57% of 72 students who participated reported “assimilator” as their dominant learning style. No differences were found between student and preceptor performance on evaluations, regardless of learning style match. Conclusion. Determination of learning styles may encourage preceptors to use teaching methods to challenge students during pharmacy practice experiences; however, this does not appear to impact student or preceptor performance. PMID:23049100
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, Brian; Jackson, R. Brian
2017-03-08
The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE Project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. The first-of-a-kind project resulted in the production of both local point temperature measurements and local flow mixing experiment data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Mass Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.
Sirimanna, Pramudith; Gladman, Marc A
2017-10-01
Proficiency-based virtual reality (VR) training curricula improve intraoperative performance, but have not been developed for laparoscopic appendicectomy (LA). This study aimed to develop an evidence-based training curriculum for LA. A total of 10 experienced (>50 LAs), eight intermediate (10-30 LAs) and 20 inexperienced (<10 LAs) operators performed guided and unguided LA tasks on a high-fidelity VR simulator using internationally relevant techniques. The ability to differentiate levels of experience (construct validity) was measured using simulator-derived metrics. Learning curves were analysed. Proficiency benchmarks were defined by the performance of the experienced group. Intermediate and experienced participants completed a questionnaire to evaluate the realism (face validity) and relevance (content validity). Of 18 surgeons, 16 (89%) considered the VR model to be visually realistic and 17 (95%) believed that it was representative of actual practice. All 'guided' modules demonstrated construct validity (P < 0.05), with learning curves that plateaued between sessions 6 and 9 (P < 0.01). When comparing inexperienced to intermediates to experienced, the 'unguided' LA module demonstrated construct validity for economy of motion (5.00 versus 7.17 versus 7.84, respectively; P < 0.01) and task time (864.5 s versus 477.2 s versus 352.1 s, respectively, P < 0.01). Construct validity was also confirmed for number of movements, path length and idle time. Validated modules were used for curriculum construction, with proficiency benchmarks used as performance goals. A VR LA model was realistic and representative of actual practice and was validated as a training and assessment tool. Consequently, the first evidence-based internationally applicable training curriculum for LA was constructed, which facilitates skill acquisition to proficiency. © 2017 Royal Australasian College of Surgeons.
Development and validation of a music performance anxiety inventory for gifted adolescent musicians.
Osborne, Margaret S; Kenny, Dianna T
2005-01-01
Music performance anxiety (MPA) is a distressing experience for musicians of all ages, yet the empirical investigation of MPA in adolescents has received little attention to date. No measures specifically targeting MPA in adolescents have been empirically validated. This article presents findings of an initial study into the psychometric properties and validation of the Music Performance Anxiety Inventory for Adolescents (MPAI-A), a new self-report measure of MPA for this group. Data from 381 elite young musicians aged 12-19 years were used to investigate the factor structure, internal reliability, construct and divergent validity of the MPAI-A. Cronbach's alpha for the full measure was .91. Factor analysis identified three factors, which together accounted for 53% of the variance. Construct validity was demonstrated by significant positive relationships with social phobia (measured using the Social Phobia Anxiety Inventory [Beidel, D. C., Turner, S. M., & Morris, T. L. (1995). A new inventory to assess childhood social anxiety and phobia: The Social Phobia and Anxiety Inventory for Children. Psychological Assessment, 7(1), 73-79; Beidel, D. C., Turner, S. M., & Morris, T. L. (1998). Social Phobia and Anxiety Inventory for Children (SPAI-C). North Tonawanda, NY: Multi-Health Systems Inc.]) and trait anxiety (measured using the State Trait Anxiety Inventory [Spielberger, C. D. (1983). State-Trait Anxiety Inventory STAI (Form Y). Palo Alto, CA: Consulting Psychologists Press, Inc.]). The MPAI-A demonstrated convergent validity by a moderate to strong positive correlation with an adult measure of MPA. Discriminant validity was established by a weaker positive relationship with depression, and no relationship with externalizing behavior problems. It is hoped that the MPAI-A, as the first empirically validated measure of adolescent musicians' performance anxiety, will enhance and promote phenomenological and treatment research in this area.
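Cronbach's alpha, the internal-reliability statistic reported above, is computed as alpha = k/(k-1) x (1 - sum of item variances / variance of the total score) for a k-item scale. A minimal Python sketch (the item matrix is random placeholder data, not MPAI-A responses):

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of scores
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Illustrative only: 381 respondents x 15 Likert-type items
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 7, size=(381, 15)).astype(float)
    print(cronbach_alpha(scores))  # random data gives alpha near 0; real scales score higher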
Statistical issues in the design and planning of proteomic profiling experiments.
Cairns, David A
2015-01-01
The statistical design of a clinical proteomics experiment is a critical part of a well-undertaken investigation. Standard concepts from experimental design such as randomization, replication and blocking should be applied in all experiments, and this is possible when the experimental conditions are well understood by the investigator. The large number of proteins simultaneously considered in proteomic discovery experiments means that determining the number of required replicates to perform a powerful experiment is more complicated than in simple experiments. However, by using information about the nature of an experiment and making simple assumptions, this is achievable for a variety of experiments useful for biomarker discovery and initial validation.
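To make the replication point concrete: with many proteins tested simultaneously, the per-test significance level must shrink (e.g., Bonferroni), which raises the number of replicates needed for a given power. A sketch of a per-protein two-sample t-test power calculation (the effect size and number of tests are illustrative assumptions, not values from the paper):

    import numpy as np
    from scipy import stats

    def replicates_needed(effect_size, n_tests, power=0.8, alpha=0.05):
        # Smallest n per group reaching the target power for a two-sample
        # t-test at a Bonferroni-corrected level alpha / n_tests.
        a = alpha / n_tests
        for n in range(2, 1000):
            df = 2 * n - 2
            crit = stats.t.ppf(1 - a / 2, df)
            ncp = effect_size * np.sqrt(n / 2)  # noncentrality parameter
            achieved = 1 - stats.nct.cdf(crit, df, ncp) + stats.nct.cdf(-crit, df, ncp)
            if achieved >= power:
                return n
        return None

    # e.g. detect a 1.5-SD abundance shift among 500 candidate proteins
    print(replicates_needed(effect_size=1.5, n_tests=500))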
Markov Jump-Linear Performance Models for Recoverable Flight Control Computers
NASA Technical Reports Server (NTRS)
Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.
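A Markov jump-linear system switches among linear dynamics according to a Markov chain, a natural fit for a computer that occasionally drops into a recovery mode after an upset. A toy simulation sketch (the mode dynamics and transition probabilities below are invented for illustration, not the NASA Langley model):

    import numpy as np

    rng = np.random.default_rng(1)
    A = {0: np.array([[0.98, 0.05], [0.00, 0.95]]),   # assumed nominal-mode dynamics
         1: np.array([[1.00, 0.05], [0.00, 0.80]])}   # assumed recovery-mode dynamics
    P = np.array([[0.999, 0.001],                     # P[i, j] = Pr(next mode j | mode i)
                  [0.200, 0.800]])

    x, mode, cost = np.array([1.0, 0.0]), 0, 0.0
    for k in range(10_000):
        mode = rng.choice(2, p=P[mode])
        x = A[mode] @ x + 0.001 * rng.standard_normal(2)  # process noise
        cost += x @ x                                     # accumulate tracking error power
    print("mean tracking error power:", cost / 10_000)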
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benoit, J. C.; Bourdot, P.; Eschbach, R.
2012-07-01
A Decay Heat (DH) experiment on the whole core of the French Sodium-Cooled Fast Reactor PHENIX has been conducted in May 2008. The measurements began an hour and a half after the shutdown of the reactor and lasted twelve days. It is one of the experiments used for the experimental validation of the depletion code DARWIN thereby confirming the excellent performance of the aforementioned code. Discrepancies between measured and calculated decay heat do not exceed 8%. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram), and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and hence allows prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.
Infrared Light Structured Sensor 3D Approach to Estimate Kidney Volume: A Validation Study.
Garisto, Juan; Bertolo, Riccardo; Dagenais, Julien; Kaouk, Jihad
2018-06-26
To validate a new procedure for the three-dimensional (3D) estimation of total renal parenchyma volume (RPV) using a structured-light infrared laser sensor. To evaluate the accuracy of the sensor for assessing renal volume, we performed three experiments. Twenty freshly excised porcine kidneys were obtained. In Experiment A, the water displacement method was used to obtain a determination of the RPV after immersing every kidney into 0.9% saline. Thereafter a structured sensor (Occipital, San Francisco, CA, USA) was used to scan the kidney. The kidney sample surface was represented initially as a mesh and then imported into MeshLab (Visual Computing Lab, Pisa, Italy) software to obtain the surface volume. In Experiment B, a partial excision of the kidney was performed, with measurement of the excised volume and the remnant. In Experiment C, a renorrhaphy of the remnant kidney was performed and then measured. Bias and limits of agreement (LOA) were determined using the Bland-Altman method. Reliability was assessed using the intraclass correlation coefficient (ICC). In Experiment A, the sensor bias was -1.95 mL (LOA: -19.5 to 15.59, R² = 0.410), slightly overestimating the volumes. In Experiment B, the remnant kidney after partial excision and the excised kidney volume were measured, showing a sensor bias of -0.5 mL (LOA: -5.34 to 4.20, R² = 0.490) and -0.6 mL (LOA: -1.97 to 0.77, R² = 0.561), respectively. In Experiment C, the sensor bias was -0.89 mL (LOA: -12.9 to 11.1, R² = 0.888). The ICC was 0.9998. The sensor is a reliable method for assessing total renal volume with high levels of accuracy. Copyright © 2018. Published by Elsevier Inc.
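Bland-Altman bias and limits of agreement are simply the mean and mean ± 1.96 SD of the paired differences between the two methods. A minimal sketch (the volumes are placeholders, not the study data):

    import numpy as np

    def bland_altman(a, b):
        # a, b: paired measurements, e.g. sensor vs. water displacement (mL)
        d = a - b
        bias = d.mean()
        sd = d.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    sensor = np.array([150.0, 132.5, 170.2, 145.8])     # hypothetical volumes
    reference = np.array([152.1, 133.0, 168.9, 149.5])
    bias, loa = bland_altman(sensor, reference)
    print(f"bias = {bias:.2f} mL, LOA = {loa[0]:.2f} to {loa[1]:.2f} mL")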
Replicating the Z iron opacity experiments on the NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perry, T. S.; Heeter, R. F.; Opachich, Y. P.
Here, X-ray opacity is a crucial factor in all radiation-hydrodynamics calculations, yet it is one of the least validated of the material properties in the simulation codes. Recent opacity experiments at the Sandia Z-machine have shown up to factor-of-two discrepancies between theory and experiment, casting doubt on the validity of the opacity models. Therefore, a new experimental opacity platform is being developed on the National Ignition Facility (NIF), not only to verify the Z-machine experimental results but also to extend the experiments to other temperatures and densities. The first experiments will be directed towards measuring the opacity of iron at a temperature of ~160 eV and an electron density of ~7 × 10²¹ cm⁻³. Preliminary experiments on NIF have demonstrated the ability to create a sufficiently bright point backlighter using an imploding plastic capsule, and also a hohlraum that can heat the opacity sample to the desired conditions. The first of these iron opacity experiments is expected to be performed in 2017.
NASA Technical Reports Server (NTRS)
Maghami, Peiman; O'Donnell, James, Jr.; Hsu, Oscar; Ziemer, John; Dunn, Charles
2017-01-01
The Space Technology-7 Disturbance Reduction System (DRS) is an experiment package aboard the European Space Agency (ESA) LISA Pathfinder spacecraft. LISA Pathfinder launched from Kourou, French Guiana on December 3, 2015. The DRS is tasked to validate two specific technologies: colloidal micro-Newton thrusters (CMNT) to provide low-noise control capability of the spacecraft, and drag-free control flight. This validation is performed using highly sensitive drag-free sensors, which are provided by the LISA Technology Package of the European Space Agency. The Disturbance Reduction System is required to maintain the spacecraft's position with respect to a free-floating test mass to better than 10 nm/√Hz along its sensitive axis (the axis in optical metrology). It also has a goal of limiting the residual accelerations of either of the two test masses to below 30 (1 + [f/3 mHz]²) fm s⁻²/√Hz over the frequency range of 1 to 30 mHz. This paper briefly describes the design and the expected on-orbit performance of the control system for the two modes wherein the drag-free performance requirements are verified. The on-orbit performance of these modes is then compared to the requirements, as well as to the expected performance, and discussed.
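Written out, the two requirements quoted above take the following form; the frequency dependence of the acceleration requirement is reconstructed here from the abstract's garbled notation and should be checked against the ST7 documentation:

\[ S_x^{1/2}(f) \le 10~\mathrm{nm}/\sqrt{\mathrm{Hz}}, \qquad S_a^{1/2}(f) \le 30\left[1 + \left(\frac{f}{3~\mathrm{mHz}}\right)^{2}\right] \mathrm{fm\,s^{-2}}/\sqrt{\mathrm{Hz}}, \qquad 1~\mathrm{mHz} \le f \le 30~\mathrm{mHz}. \]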
Development of a Child Abuse Checklist to Evaluate Prehospital Provider Performance.
Alphonso, Aimee; Auerbach, Marc; Bechtel, Kirsten; Bilodeau, Kyle; Gawel, Marcie; Koziel, Jeannette; Whitfill, Travis; Tiyyagura, Gunjan Kamdar
2017-01-01
To develop and provide validity evidence for a performance checklist to evaluate the child abuse screening behaviors of prehospital providers. Checklist Development: We developed the first iteration of the checklist after review of the relevant literature and on the basis of the authors' clinical experience. Next, a panel of six content experts participated in three rounds of Delphi review to reach consensus on the final checklist items. Checklist Validation: Twenty-eight emergency medical services (EMS) providers (16 EMT-Basics, 12 EMT-Paramedics) participated in a standardized simulated case of physical child abuse to an infant, followed by one-on-one semi-structured qualitative interviews. Three reviewers scored the videotaped performance using the final checklist. Light's kappa and Cronbach's alpha were calculated to assess inter-rater reliability (IRR) and internal consistency, respectively. The correlations of successful child abuse screening with checklist task completion and with participant characteristics were compared using Pearson's chi-squared test to gather evidence for construct validity. The Delphi review process resulted in a final checklist that included 24 items classified with trichotomous scoring (done, not done, or not applicable). The overall IRR of the three raters was 0.70 using Light's kappa, indicating substantial agreement. Internal consistency of the checklist was low, with an overall Cronbach's alpha of 0.61. Of 28 participants, only 14 (50%) successfully screened for child abuse in simulation. Participants who successfully screened for child abuse did not differ significantly from those who failed to screen in terms of training level, past experience with child abuse reporting, or self-reported confidence in detecting child abuse (all p > 0.30). Of all 24 tasks, only the task of exposing the infant significantly correlated with successful detection of child abuse (p < 0.05). We developed a child abuse checklist that demonstrated strong content validity and substantial inter-rater reliability, but successful item completion did not correlate with other markers of provider experience. The validated instrument has important potential for training, continuing education, and research for prehospital providers at all levels of training.
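Light's kappa, used above for three raters, is simply the mean of Cohen's kappa over all rater pairs. A minimal sketch (the ratings are invented; the checklist's actual items are not reproduced):

    import itertools
    from sklearn.metrics import cohen_kappa_score

    def lights_kappa(ratings):
        # ratings: list of per-rater label sequences over the same items
        pairs = list(itertools.combinations(range(len(ratings)), 2))
        return sum(cohen_kappa_score(ratings[i], ratings[j]) for i, j in pairs) / len(pairs)

    r1 = ["done", "not", "na", "done", "done", "not"]
    r2 = ["done", "not", "na", "done", "not", "not"]
    r3 = ["done", "done", "na", "done", "done", "not"]
    print(lights_kappa([r1, r2, r3]))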
Sánchez, Renata; Rodríguez, Omaira; Rosciano, José; Vegas, Liumariel; Bond, Verónica; Rojas, Aram; Sanchez-Ismayel, Alexis
2016-09-01
The objective of this study is to determine the ability of the GEARS scale (Global Evaluative Assessment of Robotic Skills) to differentiate individuals with different levels of experience in robotic surgery, as a fundamental validation. This is a cross-sectional study that included three groups of individuals with different levels of experience in robotic surgery (expert, intermediate, novice), whose performance was assessed with GEARS by two reviewers. The difference between groups was determined by the Mann-Whitney test, and the consistency between the reviewers was studied by the Kendall W coefficient. The agreement between the reviewers of the GEARS scale was 0.96. The score was 29.8 ± 0.4 for experts, 24 ± 2.8 for intermediates and 16 ± 3 for novices, with statistically significant differences between all of them (p < 0.05). All parameters of the scale allowed discrimination between different levels of experience, with the exception of the depth perception item. We conclude that the GEARS scale was able to differentiate between individuals with different levels of experience in robotic surgery and, therefore, is a validated and useful tool to evaluate surgeons in training.
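A sketch of the two statistics used in this validation: the Mann-Whitney U test for between-group score differences and Kendall's W for agreement between the two reviewers (the scores below are illustrative, not the study data):

    import numpy as np
    from scipy import stats

    experts = np.array([30.0, 29.5, 30.0, 29.8])   # hypothetical GEARS totals
    novices = np.array([16.0, 13.0, 19.0, 17.0, 15.0])
    print(stats.mannwhitneyu(experts, novices, alternative="two-sided"))

    def kendalls_w(scores):
        # scores: m raters x n subjects; W = 12*S / (m^2 * (n^3 - n)), no ties
        m, n = scores.shape
        ranks = np.array([stats.rankdata(row) for row in scores])
        rank_sums = ranks.sum(axis=0)
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12 * s / (m ** 2 * (n ** 3 - n))

    reviewer_scores = np.array([[29, 24, 16, 27, 18],
                                [30, 23, 15, 28, 19]])
    print(kendalls_w(reviewer_scores))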
The New Millennium Program: Validating Advanced Technologies for Future Space Missions
NASA Technical Reports Server (NTRS)
Minning, Charles P.; Luers, Philip
1999-01-01
This presentation reviews the activities of the New Millennium Program (NMP) in validating advanced technologies for space missions. The focus of these breakthrough technologies is to enable new capabilities that fulfill the science needs, while reducing costs of future missions. There is a broad spectrum of NMP partners, including government agencies, universities and private industry. DS-1 was launched on October 24, 1998. Amongst the technologies validated by the NMP on DS-1 are: a Low Power Electronics Experiment, the Power Activation and Switching Module, and Multi-Functional Structures. The first two of these technologies are operational and the data analysis is still ongoing. The third is also operational, and its performance parameters have been verified. The second program, DS-2, was launched January 3, 1999. It is expected to impact near Mars' southern polar region on 3 December 1999. The technologies used on this mission awaiting validation are an advanced microcontroller, a power microelectronics unit, an evolved water experiment and soil thermal conductivity experiment, Lithium-Thionyl Chloride batteries, the flexible cable interconnect, an aeroshell/entry system, and a compact telecom system. EO-1, on schedule for launch in December 1999, carries several technologies to be validated. Amongst these are: a Carbon-Carbon Radiator, an X-band Phased Array Antenna, a pulsed plasma thruster, a wideband advanced recorder processor, an atmospheric corrector, lightweight flexible solar arrays, the Advanced Land Imager, and the Hyperion instrument.
Kirkman, Matthew A; Muirhead, William; Sevdalis, Nick; Nandi, Dipankar
2015-01-01
Simulation is gaining increasing interest as a method of delivering high-quality, time-effective, and safe training to neurosurgical residents. However, most current simulators are purpose-built for simulation, being relatively expensive and inaccessible to many residents. The purpose of this study was to provide the first comprehensive validity assessment of ventriculostomy performance metrics from the Medtronic StealthStation S7 Surgical Navigation System, a neuronavigational tool widely used in the clinical setting, as a training tool for simulated ventriculostomy while concomitantly reporting on stress measures. A prospective study where participants performed 6 simulated ventriculostomy attempts on a model head with StealthStation-coregistered imaging. The performance measures included distance of the ventricular catheter tip to the foramen of Monro and presence of the catheter tip in the ventricle. Data on objective and self-reported stress and workload measures were also collected. The operating rooms of the National Hospital for Neurology and Neurosurgery, Queen Square, London. A total of 31 individuals with varying levels of prior ventriculostomy experience, varying in seniority from medical student to senior resident. Performance at simulated ventriculostomy improved significantly over subsequent attempts, irrespective of previous ventriculostomy experience. Performance improved whether or not the StealthStation display monitor was used for real-time visual feedback, but performance was optimal when it was. Further, performance was inversely correlated with both objective and self-reported measures of stress (traditionally referred to as concurrent validity). Stress and workload measures were well-correlated with each other, and they also correlated with technical performance. These initial data support the use of the StealthStation as a training tool for simulated ventriculostomy, providing a safe environment for repeated practice with immediate feedback. Although the potential implications are profound for neurosurgical education and training, further research following this proof-of-concept study is required on a larger scale for full validation and proof that training translates into improved long-term simulated and patient outcomes. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Validation of OpenFoam for heavy gas dispersion applications.
Mack, A; Spruijt, M P N
2013-11-15
In the present paper, heavy-gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data were validated against experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity-driven environment (slope), where the heavy gas induced the turbulence. For the code-to-code comparison, a hypothetical heavy-gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases applying the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in a strong damping of the turbulence and therefore reduced heavy-gas mixing. This buoyancy-based interaction in particular was studied in order to ensure that the turbulence-buoyancy coupling is the main driver of the reduced mixing, and not the global behaviour of the turbulence modelling. For both test cases, comparisons were performed between OpenFoam and Fluent solutions, which were mainly in good agreement with each other. Besides steady-state solutions, the time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of both codes were in good agreement with each other and with the experimental data; the turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. Within the strongly turbulent environment, both codes showed excellent comparability. Copyright © 2013 Elsevier B.V. All rights reserved.
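For context, the turbulence-buoyancy coupling discussed here enters the standard k-ɛ model through a buoyancy production term in the turbulent-kinetic-energy equation, commonly written in density-gradient form (a textbook formulation, not quoted from the paper):

\[ G_b = -\frac{\mu_t}{\rho\,\sigma_t}\, g_i\, \frac{\partial \rho}{\partial x_i}, \]

which is negative in a stably stratified heavy-gas layer, destroying turbulent kinetic energy and thereby suppressing mixing, the damping behaviour both codes must reproduce.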
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes, which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
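The Monte Carlo idea behind such correlation estimates is to sample the shared uncertain parameters many times, recompute each experiment's k-eff for every sample, and correlate the results. A toy sketch of the principle (not SCALE/Sampler, which propagates the samples through full transport calculations; all uncertainty magnitudes are invented):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    # One shared uncertainty (e.g., a common fuel-enrichment error) plus
    # experiment-specific uncertainties -- magnitudes illustrative only.
    shared = rng.normal(0.0, 0.002, n)
    keff_1 = 1.0 + shared + rng.normal(0.0, 0.001, n)
    keff_2 = 1.0 + shared + rng.normal(0.0, 0.001, n)
    print("correlation:", np.corrcoef(keff_1, keff_2)[0, 1])  # ~0.8 with these magnitudes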
Validity and reliability of the robotic objective structured assessment of technical skills
Siddiqui, Nazema Y.; Galloway, Michael L.; Geller, Elizabeth J.; Green, Isabel C.; Hur, Hye-Chun; Langston, Kyle; Pitter, Michael C.; Tarr, Megan E.; Martino, Martin A.
2015-01-01
Objective. Objective structured assessments of technical skills (OSATS) have been developed to measure the skill of surgical trainees. Our aim was to develop an OSATS specifically for trainees learning robotic surgery (R-OSATS). Study Design. This is a multi-institutional study in eight academic training programs. We created an assessment form to evaluate robotic surgical skill through five inanimate exercises. Obstetrics/gynecology, general surgery, and urology residents, fellows, and faculty completed five robotic exercises on a standard training model. Study sessions were recorded and randomly assigned to three blinded judges who scored performance using the assessment form. Construct validity was evaluated by comparing scores between participants with different levels of surgical experience; inter- and intra-rater reliability were also assessed. Results. We evaluated 83 residents, 9 fellows, and 13 faculty, totaling 105 participants; 88 (84%) were from obstetrics/gynecology. Our assessment form demonstrated construct validity, with faculty and fellows performing significantly better than residents (mean scores: 89 ± 8 faculty; 74 ± 17 fellows; 59 ± 22 residents, p < 0.01). In addition, participants with more robotic console experience scored significantly higher than those with fewer prior console surgeries (p < 0.01). R-OSATS demonstrated good inter-rater reliability across all five drills (mean Cronbach's α: 0.79 ± 0.02). Intra-rater reliability was also high (mean Spearman's correlation: 0.91 ± 0.11). Conclusions. We developed an assessment form for robotic surgical skill that demonstrates construct validity and inter- and intra-rater reliability. When paired with standardized robotic skill drills this form may be useful to distinguish between levels of trainee performance. PMID:24807319
[Selection of medical students : Measurement of cognitive abilities and psychosocial competencies].
Schwibbe, Anja; Lackamp, Janina; Knorr, Mirjana; Hissbach, Johanna; Kadmon, Martina; Hampe, Wolfgang
2018-02-01
The German Constitutional Court is currently reviewing whether the current medical school admission process is compatible with the constitutional right to freedom of profession, since applicants without an excellent GPA usually have to wait for seven years. If the admission system is changed, politicians would like to increase the influence of psychosocial criteria on selection, as specified by the Masterplan Medizinstudium 2020. What experience has been gained with the current selection procedures? How could Situational Judgement Tests contribute to the validity of future selection procedures at German medical schools? High school GPA is the best predictor of study performance, but is increasingly under discussion due to the lack of comparability between states and schools and the growing number of applicants with top grades. Aptitude and knowledge tests, especially in the natural sciences, show incremental validity in predicting study performance. The measurement of psychosocial competencies with traditional interviews shows rather low reliability and validity. The more reliable multiple mini-interviews are superior in predicting practical study performance. Situational judgement tests (SJTs) used abroad are regarded as reliable and valid; the correlation of a German SJT piloted in Hamburg with the multiple mini-interview is cautiously encouraging. A model proposed by the Medizinischer Fakultätentag and the Bundesvertretung der Medizinstudierenden takes these results into account. Student selection is proposed to be based on a combination of high school GPA (40%) and a cognitive test (40%), as well as an SJT (10%) and job experience (10%). Furthermore, the faculties still have the option to carry out specific selection procedures.
Bos, Nanne; Sturms, Leontien M; Stellato, Rebecca K; Schrijvers, Augustinus J P; van Stel, Henk F
2015-10-01
Patients' experiences are an indicator of health-care performance in the accident and emergency department (A&E). The Consumer Quality Index for the Accident and Emergency department (CQI A&E), a questionnaire to assess the quality of care as experienced by patients, was investigated. The internal consistency, construct validity and discriminative capacity of the questionnaire were examined. In the Netherlands, twenty-one A&Es participated in a cross-sectional survey, covering 4883 patients. The questionnaire consisted of 78 questions. Principal components analysis determined underlying domains. Internal consistency was determined by Cronbach's alpha coefficients, construct validity by Pearson's correlation coefficients and the discriminative capacity by intraclass correlation coefficients and reliability of A&E-level mean scores (G-coefficient). Seven quality domains emerged from the principal components analysis: information before treatment, timeliness, attitude of health-care professionals, professionalism of received care, information during treatment, environment and facilities, and discharge management. Domains were internally consistent (range: 0.67-0.84). Five domains and the 'global quality rating' had the capacity to discriminate among A&Es (significant intraclass correlation coefficient). Four domains and the 'global quality rating' were close to or above the threshold for reliably demonstrating differences among A&Es. The patients' experiences score on the domain timeliness showed the largest range between the worst- and best-performing A&E. The CQI A&E is a validated survey to measure health-care performance in the A&E from patients' perspective. Five domains regarding quality of care aspects and the 'global quality rating' had the capacity to discriminate among A&Es. © 2013 John Wiley & Sons Ltd.
Validation and Continued Development of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2017-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively driven spheromak. It is thus prudent to develop and validate a computational model that can be used to interpret current results and to study the effect of possible design choices on plasma behavior. An extended MHD model has shown good agreement with experimental data at 14 kHz injector operation. Efforts to extend the existing validation to a range of higher frequencies (36, 53, 68 kHz) using the PSI-Tet 3D extended MHD code will be presented, along with simulations of potential combinations of flux conserver features and helicity injector configurations and their impact on current drive performance, density control, and temperature for future SIHI experiments. Work supported by USDoE.
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
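As a feel for the cell-level electrochemistry such a stack model embeds, a zero-dimensional surrogate relates voltage to a Nernst potential minus an area-specific-resistance loss. The sketch below uses invented parameter values and is far simpler than the coupled CFD model in the paper:

    import numpy as np

    F, R = 96485.0, 8.314  # Faraday constant (C/mol), gas constant (J/(mol K))

    def cell_voltage(j, T=1073.0, E0=0.95, asr=0.35,
                     p_h2=0.4, p_h2o=0.6, p_o2=0.21):
        # Nernst potential for H2 + 1/2 O2 -> H2O, minus an ohmic loss.
        # j in A/cm^2, asr in ohm cm^2; all parameter values are assumptions.
        e_nernst = E0 + (R * T / (2 * F)) * np.log(p_h2 * np.sqrt(p_o2) / p_h2o)
        return e_nernst - asr * j

    print(cell_voltage(j=0.5))  # ~0.7 V at these assumed conditions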
Ensuring the validity of calculated subcritical limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H.K.
1977-01-01
The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, ''Validation of Calculational Methods for Nuclear Criticality Safety.'' The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.
OSI-compatible protocols for mobile-satellite communications: The AMSS experience
NASA Technical Reports Server (NTRS)
Moher, Michael
1990-01-01
The protocol structure of the international aeronautical mobile satellite service (AMSS) is reviewed with emphasis on those aspects of protocol performance, validation, and conformance which are peculiar to mobile services. This is in part an analysis of what can be learned from the AMSS experience with protocols which is relevant to the design of other mobile satellite data networks, e.g., land mobile.
Red flags in the clinical interview may forecast invalid neuropsychological testing.
Keesler, Michael E; McClung, Kirstie; Meredith-Duliba, Tawny; Williams, Kelli; Swirsky-Sacchetti, Thomas
2017-04-01
Evaluating assessment validity is expected in neuropsychological evaluation, particularly in cases with identified secondary gain, where malingering or somatization may be present. Validity is typically assessed with standalone measures and embedded indices, all within the testing portion of the examination; research on the validity of self-report in the clinical interview is limited. Based on experience with litigation-involved examinees recovering from mild traumatic brain injury (mTBI), it was hypothesized that an inconsistently reported date of injury (DOI) and/or loss of consciousness (LOC) might predict invalid performance on neurocognitive testing. This archival study examined cases of litigation-involved mTBI patients seen at an outpatient neuropsychological practice in Philadelphia, PA. Coded data included demographic variables, performance validity measures, and consistency between self-report and medicolegal records. A significant relationship was found between the consistency of examinees' self-report with records and their scores on performance validity testing, χ²(1, N = 84) = 24.18, p < .01, Φ = .49. Post hoc testing revealed significant between-group differences in three of four comparisons, with medium to large effect sizes. A final post hoc analysis found a significant correlation between the number of performance validity tests (PVTs) failed and the extent to which an examinee incorrectly reported the DOI, r(83) = .49, p < .01. Using inconsistently reported LOC and/or DOI to predict an examinee's performance as invalid had 75% sensitivity and 75% specificity. Examinees whose reported DOI or LOC differs from records may be more likely to fail one or more PVTs, suggesting possible symptom exaggeration and/or underperformance on cognitive testing.
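For concreteness, the reported 75%/75% figures follow from a 2x2 table of flagged (inconsistent DOI/LOC) versus PVT outcome. The counts below are invented to be consistent with N = 84 and those rates; the paper does not publish this exact table:

    def sens_spec(tp, fn, tn, fp):
        # Sensitivity: invalid performers correctly flagged by inconsistency.
        # Specificity: valid performers correctly left unflagged.
        return tp / (tp + fn), tn / (tn + fp)

    print(sens_spec(tp=21, fn=7, tn=42, fp=14))  # -> (0.75, 0.75)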
Uncertainty Assessment of Hypersonic Aerothermodynamics Prediction Capability
NASA Technical Reports Server (NTRS)
Bose, Deepak; Brown, James L.; Prabhu, Dinesh K.; Gnoffo, Peter; Johnston, Christopher O.; Hollis, Brian
2011-01-01
The present paper provides the background of a focused effort to assess uncertainties in predictions of heat flux and pressure in hypersonic flight (airbreathing or atmospheric entry) using state-of-the-art aerothermodynamics codes. The assessment is performed for four mission-relevant problems: (1) shock/turbulent boundary layer interaction on a compression corner, (2) shock/turbulent boundary layer interaction due to an impinging shock, (3) high-mass Mars entry and aerocapture, and (4) high-speed return to Earth. A validation-based uncertainty assessment approach with reliance on subject matter expertise is used. A code verification exercise with code-to-code comparisons and comparisons against well-established correlations is also included in this effort. A thorough review of the literature in search of validation experiments is performed, which identified a scarcity of ground-based validation experiments at hypersonic conditions. In particular, a shortage of useable experimental data at flight-like enthalpies and Reynolds numbers is found. The uncertainty was quantified using metrics that measured the discrepancy between model predictions and experimental data. The discrepancy data is statistically analyzed and investigated for physics-based trends in order to define a meaningful quantified uncertainty. The detailed uncertainty assessment of each mission-relevant problem is found in the four companion papers.
In-flight results of adaptive attitude control law for a microsatellite
NASA Astrophysics Data System (ADS)
Pittet, C.; Luzi, A. R.; Peaucelle, D.; Biannic, J.-M.; Mignot, J.
2015-06-01
Because satellites usually do not experience large changes of mass, center of gravity or inertia in orbit, linear time invariant (LTI) controllers have been widely used to control their attitude. But as the pointing requirements become more stringent and the satellite's structure more complex, with large steerable and/or deployable appendages and flexible modes occurring in the control bandwidth, one unique LTI controller is no longer sufficient. One solution consists in designing several LTI controllers, one for each set point, but the switching between them is difficult to tune and validate. Another interesting solution is to use adaptive controllers, which offer at least two advantages: first, as the controller automatically and continuously adapts to the set point without changing its structure, no switching logic is needed in the software; second, performance and stability of the closed-loop system can be assessed directly on the whole flight domain. To evaluate the real benefits of adaptive control for satellites, in terms of design, validation and performance, CNES selected it as an end-of-life experiment on the PICARD microsatellite. This paper describes the design, validation and in-flight results of the new adaptive attitude control law, compared to the nominal control law.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.
Brand, Pierre; Boulanger, Benoît; Segonds, Patricia; Petit, Yannick; Félix, Corinne; Ménaert, Bertrand; Taira, Takunori; Ishizuki, Hideki
2009-09-01
We validated the theory of angular quasi-phase-matching (AQPM) by performing measurements of second-harmonic generation and difference-frequency generation. A nonlinear least-squares fit of these experimental data yielded refined Sellmeier equations for 5%MgO:PPLN that are now valid over the complete transparency range of the crystal. We also showed that AQPM exhibits complementary spectral ranges and acceptances compared with birefringence phase matching.
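Refining Sellmeier coefficients against measured data is a nonlinear least-squares problem. A generic sketch of fitting a one-pole Sellmeier form n²(λ) = A + B/(λ² − C) − Dλ² to refractive-index data (the coefficients and data are invented, not the published 5%MgO:PPLN equations):

    import numpy as np
    from scipy.optimize import curve_fit

    def n_sellmeier(lam_um, A, B, C, D):
        # One-pole Sellmeier form; wavelength in micrometers
        return np.sqrt(A + B / (lam_um**2 - C) - D * lam_um**2)

    lam = np.array([0.5, 0.8, 1.1, 1.6, 2.2, 3.0])           # wavelengths, um
    n_meas = np.array([2.23, 2.18, 2.15, 2.13, 2.11, 2.08])  # fabricated indices
    popt, _ = curve_fit(n_sellmeier, lam, n_meas, p0=[4.5, 0.1, 0.04, 0.02],
                        bounds=([1, 0, 0, 0], [6, 1, 0.2, 0.1]))
    print(dict(zip("ABCD", popt)))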
Helicopter rotor loads using a matched asymptotic expansion technique
NASA Technical Reports Server (NTRS)
Pierce, G. A.; Vaidyanathan, A. R.
1981-01-01
The theoretical basis and computational feasibility of the Van Holten method were examined, together with its performance and range of validity, by comparison with experiment and other approximate methods. It is found that within the restrictions of incompressible, potential flow and the assumption of small disturbances, the method does lead to a valid description of the flow. However, the method begins to break down under conditions favoring nonlinear effects such as wake distortion and blade/rotor interaction.
Scheepers, Renée A; Lases, Lenny S S; Arah, Onyebuchi A; Heineman, Maas Jan; Lombarts, Kiki M J M H
2017-10-01
Physician work engagement is associated with better work performance and fewer medical errors; however, whether work-engaged physicians perform better from the patient perspective is unknown. Although availability of job resources (autonomy, colleague support, participation in decision making, opportunities for learning) bolster work engagement, this relationship is understudied among physicians. This study investigated associations of physician work engagement with patient care experience and job resources in an academic setting. The authors collected patient care experience evaluations, using nine validated items from the Dutch Consumer Quality index in two academic hospitals (April 2014 to April 2015). Physicians reported job resources and work engagement using, respectively, the validated Questionnaire on Experience and Evaluation of Work and the Utrecht Work Engagement Scale. The authors conducted multivariate adjusted mixed linear model and linear regression analyses. Of the 9,802 eligible patients and 238 eligible physicians, respectively, 4,573 (47%) and 185 (78%) participated. Physician work engagement was not associated with patient care experience (B = 0.01; 95% confidence interval [CI] = -0.02 to 0.03; P = .669). However, learning opportunities (B = 0.28; 95% CI = 0.05 to 0.52; P = .019) and autonomy (B = 0.31; 95% CI = 0.10 to 0.51; P = .004) were positively associated with work engagement. Higher physician work engagement did not translate into better patient care experience. Patient experience may benefit from physicians who deliver stable quality under varying levels of work engagement. From the physicians' perspective, autonomy and learning opportunities could safeguard their work engagement.
Detailed Validation Assessment of Turbine Stage Disc Cavity Rotating Flows
NASA Astrophysics Data System (ADS)
Kanjiyani, Shezan
The subject of this thesis is the amount of cooling air assigned to seal high-pressure turbine rim cavities, which is critical for performance as well as component life. Insufficient air leads to excessive hot annulus gas ingestion and its penetration deep into the cavity, compromising disc life; excessive purge air adversely affects performance. Experiments on a rotating turbine stage rig which included a rotor-stator forward disc cavity were performed at Arizona State University. The turbine rig has 22 vanes and 28 blades, while the rim cavity is composed of a single-tooth rim lab seal and a rim platform overlap seal. Time-averaged static pressures were measured in the gas path and the cavity, while mainstream gas ingestion into the cavity was determined by measuring the concentration distribution of tracer gas (carbon dioxide). Additionally, particle image velocimetry (PIV) was used to measure fluid velocity inside the rim cavity between the lab seal and the overlap. The data from the experiments were compared to 360-degree unsteady RANS (URANS) CFD simulations. Although not able to match the time-averaged test data satisfactorily, the CFD simulations brought to light the unsteadiness present in the flow during the experiment, which the slower-response data did not fully capture. To interrogate the validity of URANS simulations in capturing complex rotating flow physics, the scope of this work also included validating the CFD tool by comparing its predictions against experimental LDV data in a closed rotor-stator cavity. The enclosed cavity has a stationary shroud and a rotating hub, and mass flow does not enter or exit the system. A full 360-degree numerical simulation was performed comparing Fluent LES with URANS turbulence models. Results from these investigations indicate that state-of-the-art URANS models under-predict the closed-cavity tangential velocity by 32% to 43%, and open rim cavity sealing effectiveness by 50%, compared to test data. The goal of this thesis is to assess the validity of URANS turbulence models in more complex rotating flows, compare their accuracy with LES simulations, suggest CFD settings to better simulate turbine stage mainstream/disc cavity interaction with ingestion, and recommend experimentation techniques.
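For reference, the rim-cavity sealing effectiveness inferred from a tracer-gas (CO2) survey is conventionally defined as (a standard definition in the rim-sealing literature, not quoted from the thesis):

\[ \varepsilon_c = \frac{c - c_{\mathrm{ann}}}{c_{\mathrm{purge}} - c_{\mathrm{ann}}}, \]

where c is the local tracer concentration in the cavity, c_purge that of the seeded purge flow, and c_ann the background level in the main gas path; ε_c = 1 indicates a perfectly sealed cavity and ε_c = 0 complete ingestion.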
NASA Astrophysics Data System (ADS)
Kitahara, Yu; Yamamoto, Yuhji; Ohno, Masao; Kuwahara, Yoshihiro; Kameda, Shuichi; Hatakeyama, Tadahiro
2018-05-01
Paleomagnetic information reconstructed from archeological materials can be utilized to estimate the archeological age of excavated relics, in addition to revealing the geomagnetic secular variation and core dynamics. The direction and intensity of the Earth's magnetic field (archeodirection and archeointensity) can be ascertained using different methods, many of which have been proposed over the past decade. Among the new experimental techniques for archeointensity estimates is the Tsunakawa-Shaw method. This study demonstrates the validity of the Tsunakawa-Shaw method to reconstruct archeointensity from samples of baked clay from archeological relics. The validity of the approach was tested by comparison with the IZZI-Thellier method. The intensity values obtained coincided at the standard deviation (1σ) level. A total of 8 specimens for the Tsunakawa-Shaw method and 16 specimens for the IZZI-Thellier method, from 8 baked clay blocks collected from the surface of the kiln, were used in these experiments. Among them, 8 specimens (for the Tsunakawa-Shaw method) and 3 specimens (for the IZZI-Thellier method) passed a set of strict selection criteria used in the final evaluation of validity. Additionally, we performed rock magnetic experiments, mineral analysis, and paleodirection measurement to evaluate the suitability of the baked clay samples for paleointensity experiments, and hence confirmed that the sample properties were ideal for performing paleointensity experiments. It is notable that the newly estimated archeomagnetic intensity values are lower than those in previous studies that used other paleointensity methods for the tenth century in Japan.
An Experimental and Numerical Study of a Supersonic Burner for CFD Model Development
NASA Technical Reports Server (NTRS)
Magnotti, G.; Cutler, A. D.
2008-01-01
A laboratory scale supersonic burner has been developed for validation of computational fluid dynamics models. Detailed numerical simulations were performed for the flow inside the combustor, and coupled with finite element thermal analysis to obtain more accurate outflow conditions. A database of nozzle exit profiles for a wide range of conditions of interest was generated to be used as boundary conditions for simulation of the external jet, or for validation of non-intrusive measurement techniques. A set of experiments was performed to validate the numerical results. In particular, temperature measurements obtained by using an infrared camera show that the computed heat transfer was larger than the measured value. Relaminarization in the convergent part of the nozzle was found to be responsible for this discrepancy, and further numerical simulations sustained this conclusion.
Time Sharing Between Robotics and Process Control: Validating a Model of Attention Switching.
Wickens, Christopher Dow; Gutzwiller, Robert S; Vieane, Alex; Clegg, Benjamin A; Sebok, Angelia; Janes, Jess
2016-03-01
The aim of this study was to validate the strategic task overload management (STOM) model that predicts task switching when concurrence is impossible. The STOM model predicts that in overload, tasks will be switched to, to the extent that they are attractive on the task attributes of high priority, interest, and salience and low difficulty; but more-difficult tasks are less likely to be switched away from once they are being performed. In Experiment 1, participants performed four tasks of the Multi-Attribute Task Battery and provided task-switching data to inform the role of difficulty and priority. In Experiment 2, participants concurrently performed an environmental control task and a robotic arm simulation. Workload was varied by automation of arm movement, by the phase of environmental control, and by the presence or absence of decision support for fault management. Attention to the two tasks was measured using a head tracker. Experiment 1 revealed the lack of influence of task priority and confirmed the differing roles of task difficulty. In Experiment 2, the percentage attention allocation across the eight conditions was predicted by the STOM model when participants rated the four attributes. Model predictions were compared against empirical data and accounted for over 95% of variance in task allocation. More-difficult tasks were performed longer than easier tasks. Task priority does not influence allocation. The multiattribute decision model provided a good fit to the data. The STOM model is useful for predicting cognitive tunneling given that human-in-the-loop simulation is time-consuming and expensive. © 2016, Human Factors and Ergonomics Society.
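In STOM-style models, a task's attractiveness is a weighted sum of its attribute values, and switching favors attractive tasks while the difficulty of the ongoing task inhibits switching away. A schematic sketch consistent with the findings above (the weights and scores are illustrative, not the calibrated model):

    import numpy as np

    def attractiveness(priority, interest, salience, difficulty,
                       w=(0.0, 1.0, 1.0, 1.0)):
        # Per the study: priority had little effect (weight ~0 here), and
        # difficulty lowers the appeal of switching *to* a task.
        wp, wi, ws, wd = w
        return wp * priority + wi * interest + ws * salience - wd * difficulty

    def stay_probability(current_difficulty, k=0.5):
        # More-difficult ongoing tasks are switched away from less often.
        return 1.0 / (1.0 + np.exp(-k * current_difficulty))

    print(attractiveness(priority=3, interest=2, salience=3, difficulty=1))
    print(stay_probability(current_difficulty=2.0))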
Validation of a novel virtual reality simulator for robotic surgery.
Schreuder, Henk W R; Persson, Jan E U; Wolswijk, Richard G H; Ihse, Ingmar; Schijven, Marlies P; Verheijen, René H M
2014-01-01
With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for the use in training of robot-assisted surgery. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were "time to complete" and "economy of motion" (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, visual graphics, movements of instruments, interaction with objects, and the depth perception were all rated as being realistic. The simulator is considered to be a very useful training tool for residents and medical specialist starting with robotic surgery. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery.
Unsteady Full Annulus Simulations of a Transonic Axial Compressor Stage
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Hathaway, Michael D.; Chen, Jen-Ping
2009-01-01
Two recent research endeavors in turbomachinery at NASA Glenn Research Center have focused on compression system stall inception and compression system aerothermodynamic performance. Physical experiments and computational research are ongoing in support of these research objectives. TURBO, an unsteady, three-dimensional, Navier-Stokes computational fluid dynamics code commissioned and developed by NASA, has been utilized, enhanced, and validated in support of these endeavors. In the research which follows, TURBO is shown to accurately capture compression system flow range, from choke to stall inception, and also to accurately calculate fundamental aerothermodynamic performance parameters. Rigorous full-annulus calculations are performed to validate TURBO's ability to simulate the unstable, unsteady, chaotic stall inception process; as part of these efforts, full-annulus calculations are also performed at a condition approaching choke to further document TURBO's capabilities to compute aerothermodynamic performance data and support a NASA code assessment effort.
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between performing an external standardization and a standard addition.
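As an illustration of the standard-addition arithmetic the experiment teaches, the following Python sketch fits signal versus spiked concentration, extrapolates to the x-intercept to recover the analyte concentration, and then checks a spike recovery; all data values are hypothetical:

# Standard-addition sketch: fit signal vs. spiked concentration and
# extrapolate to the x-intercept. Data values are hypothetical.
import numpy as np

c_added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # spike conc. (e.g., ppm)
signal  = np.array([0.20, 0.33, 0.45, 0.58, 0.70])

m, b = np.polyfit(c_added, signal, 1)    # slope and intercept
c_analyte = b / m                        # |x-intercept| = analyte conc.
print(f"analyte concentration ~ {c_analyte:.2f} ppm")

# Spike recovery check (method validation): recovery near 100%
# suggests the matrix is not biasing the result.
spike, sig_spiked, sig_unspiked = 2.0, 0.45, 0.20
recovery = (sig_spiked - sig_unspiked) / (m * spike) * 100
print(f"spike recovery ~ {recovery:.0f}%")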
Continued Development and Validation of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2015-11-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks; determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and provide an intermediate step between theory and future experiments. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (~ 36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. Results from verification of the PSI-TET extended MHD model using the GEM magnetic reconnection challenge will also be presented along with investigation of injector configurations for future SIHI experiments using Taylor state equilibrium calculations. Work supported by DoE.
Zig-zag tape influence in NREL Phase VI wind turbine
NASA Astrophysics Data System (ADS)
Gomez-Iradi, Sugoi; Munduate, Xabier
2014-06-01
A two-bladed, 10-metre-diameter wind turbine was tested in the 24.4 m × 36.6 m NASA-Ames wind tunnel (Phase VI). These experiments have been extensively used for validation of CFD and other engineering tools. The free-transition case (S) has been, and remains, the most widely employed for validation purposes; it consists of a 3° pitch case at a rotational speed of 72 rpm in upwind configuration, with and without yaw misalignment. However, there is another, less visited case (M) in which an identical configuration was tested but with the inclusion of a zig-zag tape; this was called the fixed-transition sequence. This paper shows the differences between the free- and fixed-transition cases, the latter being more appropriate for comparison with fully turbulent simulations. Steady k-ω SST fully turbulent computations performed with the WMB CFD method are compared with the experiments, showing better predictions in the attached flow region when compared against the fixed-transition experiments. This work aims to demonstrate the utility of the M case (fixed transition) and to show its differences with respect to the S case (free transition) for validation purposes.
Validation and Continued Development of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2016-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. An implementation of anisotropic viscosity, a feature observed to improve agreement between NIMROD simulations and experiment, will also be presented, along with investigations of flux conserver features and their impact on density control for future SIHI experiments. Work supported by DoE.
Flight Testing of an Airport Surface Guidance, Navigation, and Control System
NASA Technical Reports Server (NTRS)
Young, Steven D.; Jones, Denise R.
1998-01-01
This document describes operations associated with a set of flight experiments and demonstrations using a Boeing 757-200 (B-757) research aircraft as part of low visibility landing and surface operations (LVLASO) research activities. To support this experiment, the B-757 performed flight and taxi operations at Hartsfield-Atlanta International Airport (ATL) in Atlanta, GA. The B-757 was equipped with experimental displays designed to provide flight crews with sufficient information to enable safe, expedient surface operations in any weather condition down to a runway visual range (RVR) of 300 feet. In addition to the flight deck displays and supporting equipment onboard the B-757, there was also a ground-based component of the system that provided for ground controller inputs and surveillance of airport surface movements. The integrated ground and airborne components resulted in a system that has the potential to significantly improve the safety and efficiency of airport surface movements, particularly as weather conditions deteriorate. Several advanced technologies were employed to show the validity of the operational concept at a major airport facility, to validate flight simulation findings, and to assess each individual technology's performance in an airport environment. Results show that while the maturity of some of the technologies does not permit immediate implementation, the operational concept is valid and the performance is more than adequate in many areas.
Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon
2017-06-01
The aim of this study was to develop a simulation-based time-out learning programme targeted at nurses participating in high-risk invasive procedures and to determine the effects of applying the new programme on nurses' acceptance. This study used a simulation-based learning pre- and postdesign to determine the effects of implementing the programme. It targeted 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were assessed using means, standard deviations, and the Wilcoxon signed-rank test. The perception survey and score sheet were validated through a content validation index, and the reliability of the evaluators was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of the high-risk invasive procedure time-out (P<.01). Further, improvement was consistent regardless of clinical experience, workplace, or experience with simulation-based learning. The face validity of the programme was rated over 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room. © 2017 John Wiley & Sons Australia, Ltd.
Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation
NASA Technical Reports Server (NTRS)
Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.
2010-01-01
Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.
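As a minimal sketch of the kind of speed loop described, the following Python snippet applies PID control to a single lumped torsional inertia; the gains, inertia, damping, and setpoint are hypothetical stand-ins for the full six degree-of-freedom gearbox model:

# PID speed loop on a single lumped inertia, standing in for the
# drive-motor control in the gearbox model described above.
# Gains, inertia, damping, and setpoint are hypothetical.
dt, J, damping = 1e-3, 0.05, 0.02      # s, kg*m^2, N*m*s/rad
kp, ki, kd = 2.0, 15.0, 0.01           # hypothetical PID gains
setpoint = 100.0                       # target speed, rad/s

omega, integral, prev_err = 0.0, 0.0, 0.0
for _ in range(5000):                  # simulate 5 s
    err = setpoint - omega
    integral += err * dt
    torque = kp * err + ki * integral + kd * (err - prev_err) / dt
    prev_err = err
    # Lumped torsional dynamics: J * domega/dt = torque - damping * omega
    omega += (torque - damping * omega) / J * dt

print(f"speed after 5 s: {omega:.1f} rad/s")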
Detecting Cheaters without Thinking: Testing the Automaticity of the Cheater Detection Module
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning. PMID:23342012
CSI Flight Computer System and experimental test results
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Peri, F., Jr.; Schuler, P.
1993-01-01
This paper describes the CSI Computer System (CCS) and the experimental tests performed to validate its functionality. This system is comprised of two major components: the space-flight-qualified Excitation and Damping Subsystem (EDS), which performs controls calculations; and the Remote Interface Unit (RIU), which is used for data acquisition, transmission, and filtering. The flight-like RIU is the interface between the EDS and the sensors and actuators positioned on the particular structure under control. The EDS and RIU communicate over the MIL-STD-1553B, a space-flight-qualified bus. To test the CCS under realistic conditions, it was connected to the Phase-0 CSI Evolutionary Model (CEM) at NASA Langley Research Center. Various tests were performed which validated the ability of the system to perform control/structures experiments.
Robotic experiment with a force reflecting handcontroller onboard MIR space station
NASA Technical Reports Server (NTRS)
Delpech, M.; Matzakis, Y.
1994-01-01
During the French CASSIOPEE mission that will fly onboard the MIR space station in 1996, ergonomic evaluations of a force-reflecting handcontroller will be performed on a simulated robotic task. This handcontroller is part of the COGNILAB payload, which will also be used for experiments in neurophysiology. The purpose of the robotic experiment is the validation of a new control and design concept that would enhance task performance when telemanipulating space robots. Besides the handcontroller and its control unit, the experimental system includes a simulator of the slave robot dynamics for both free and constrained motions, a flat display screen, and a seat with special fixtures for holding the astronaut.
Performance testing of a vertical Bridgman furnace using experiments and numerical modeling
NASA Astrophysics Data System (ADS)
Rosch, W. R.; Fripp, A. L.; Debnam, W. J.; Pendergrass, T. K.
1997-04-01
This paper details a portion of the work performed in preparation for the growth of lead tin telluride crystals during a Space Shuttle flight. A coordinated effort of experimental measurements and numerical modeling was completed to determine the optimum growth parameters and the performance of the furnace. This work was done using NASA's Advanced Automated Directional Solidification Furnace, but the procedures used should be equally valid for other vertical Bridgman furnaces.
The relationship between performance and flow state in tennis competition.
Koehn, S; Morris, T
2012-08-01
The study aimed to examine 1) the validity of the nine-factor flow model in tennis competition; 2) differences in flow state between athletes who won or lost their competition match; 3) the link between flow and subjective performance; and 4) flow dimensions as predictors of performance outcome. The sample consisted of 188 junior tennis players (115 male, 73 female) between 12 and 18 years of age. Participants' performance was recorded during junior ranking-list tournaments. Following the completion of a tennis competition match, participants completed the Flow State Scale-2 and a subjective performance outcome measure. Acceptable flow model fit indices of CFI, TLI, SRMR, and RMSEA were found only for winning athletes. The group of winning athletes scored higher on all nine flow dimensions except time transformation than losing athletes, with statistically significant differences for challenge-skills balance, clear goals, sense of control, and autotelic experience. Significant correlation coefficients were found between flow state and subjective performance assessments. The binary logistic regression revealed concentration on the task and sense of control to be significant predictors of performance outcome. The predictor variables explained 13% of the variance in games won. The study showed that athletes who win or lose perceive flow state differently. Studies using retrospective assessments need to be aware that subjective experience could be biased by performance outcomes. Pinpointing psychological variables and their impact on ecologically valid measures, such as performance results, would support the development of effective intervention studies to increase performance in sport competition.
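For readers who want to reproduce the analysis pattern, this Python sketch fits a binary logistic regression of match outcome on two flow dimensions with scikit-learn; the data are synthetic, not the study's:

# Logistic regression of win/loss on two flow dimensions (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 188                                   # same sample size as the study
concentration = rng.uniform(1, 5, n)      # FSS-2-style 1-5 ratings
sense_of_control = rng.uniform(1, 5, n)
X = np.column_stack([concentration, sense_of_control])

# Synthetic outcome: higher flow ratings raise the win probability.
p_win = 1 / (1 + np.exp(-(0.5 * concentration + 0.4 * sense_of_control - 3)))
y = rng.random(n) < p_win

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)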
Observations with the ROWS instrument during the Grand Banks calibration/validation experiments
NASA Technical Reports Server (NTRS)
Vandemark, D.; Chapron, B.
1994-01-01
As part of a global program to validate the ocean surface sensors on board ERS-1, a joint experiment on the Grand Banks of Newfoundland was carried out in Nov. 1991. The principal objective was to provide a field validation of ERS-1 Synthetic Aperture Radar (SAR) measurements of ocean surface structure. The NASA P-3 aircraft measurements made during this experiment provide independent measurements of the ocean surface along the validation swath. The Radar Ocean Wave Spectrometer (ROWS) is a radar sensor designed to measure the direction of the long-wave components using spectral analysis of the tilt-induced radar backscatter modulation. This technique differs greatly from SAR and thus provides a unique set of measurements for use in evaluating SAR performance. Also, an altimeter channel in the ROWS gives simultaneous information on the surface wave height and the radar mean square slope parameter. The sets of geophysical parameters (wind speed, significant wave height, directional spectrum) are used to study the SAR's ability to accurately measure ocean gravity waves. The known distortion imposed on the true directional spectrum by the SAR imaging mechanism is discussed in light of the direct comparisons between ERS-1 SAR, airborne Canadian Center for Remote Sensing (CCRS) SAR, and ROWS spectra and the use of the nonlinear ocean SAR transform.
Airborne Observations and Satellite Validation: INTEX-A Experience and INTEX-B Plans
NASA Technical Reports Server (NTRS)
Crawford, James H.; Singh, Hanwant B.; Brune, William H.; Jacob, Daniel J.
2005-01-01
The Intercontinental Chemical Transport Experiment (INTEX; http://cloudl.arc.nasa.gov) is an ongoing two-phase integrated atmospheric field experiment being performed over North America (NA). Its first phase (INTEX-A) was performed in the summer of 2004 and the second phase (INTEX-B) is planned for the early spring of 2006. The main goal of INTEX-NA is to understand the transport and transformation of gases and aerosols on transcontinental/intercontinental scales and to assess their impact on air quality and climate. Central to achieving this goal is the need to relate space-based observations with those from airborne and surface platforms. During INTEX-A, NASA's DC-8 was joined by some dozen other aircraft from a large number of European and North American partners to focus on the outflow of pollution from NA to the Atlantic. Several instances of Asian pollution over NA were also encountered. INTEX-A flight planning relied extensively on satellite observations, and in turn satellite validation (Terra, Aqua, and Envisat) was given high priority. Over 20 validation profiles were successfully carried out. DC-8 sampling of smoke from Alaskan fires and of formaldehyde over forested regions, together with simultaneous satellite observations of these, provided excellent opportunities for the interplay of these platforms. The planning for INTEX-B is currently underway, and a vast majority of the "standard" and "research" products to be retrieved from Aura instruments will be measured during INTEX-B throughout the troposphere. INTEX-B will focus on the inflow of pollution from Asia to North America and on validation of satellite observations, with emphasis on Aura. Several national and international partners are expected to coordinate activities with INTEX-B, and we expect its scope to expand in the coming months. An important new development involves partnership with an NSF-sponsored campaign called MIRAGE (Megacity Impacts on Regional and Global Environments - Mexico City Pollution Outflow Field Campaign; http://mirage-mex.acd.ucar.edu/index.html). This partnership will utilize both the NASA DC-8 and NCAR C-130 aircraft and greatly expand the temporal and spatial coverage of these experiments. We will briefly describe our INTEX-A experience and discuss plans for INTEX-B activities, especially as they relate to validation and interpretation of Aura observations.
Lobo, Daniel; Morokuma, Junji; Levin, Michael
2016-09-01
Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized as hnf4 an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Gerard, James M; Scalzo, Anthony J; Borgman, Matthew A; Watson, Christopher M; Byrnes, Chelsie E; Chang, Todd P; Auerbach, Marc; Kessler, David O; Feldman, Brian L; Payne, Brian S; Nibras, Sohail; Chokshi, Riti K; Lopreiato, Joseph O
2018-06-01
We developed a first-person serious game, PediatricSim, to teach and assess performance on seven critical pediatric scenarios (anaphylaxis, bronchiolitis, diabetic ketoacidosis, respiratory failure, seizure, septic shock, and supraventricular tachycardia). In the game, players are placed in the role of a code leader and direct patient management by selecting from various assessment and treatment options. The objective of this study was to obtain supportive validity evidence for the PediatricSim game scores. Game content was developed by 11 subject matter experts and followed the American Heart Association's 2011 Pediatric Advanced Life Support Provider Manual and other authoritative references. Sixty subjects with three different levels of experience were enrolled to play the game. Before game play, subjects completed a 40-item written pretest of knowledge. Game scores were compared between subject groups using scoring rubrics developed for the scenarios. Validity evidence was established and interpreted according to Messick's framework. Content validity was supported by a game development process that involved expert experience, focused literature review, and pilot testing. Subjects rated the game favorably for engagement, realism, and educational value. Interrater agreement on game scoring was excellent (intraclass correlation coefficient = 0.91, 95% confidence interval = 0.89-0.9). Game scores were higher for attendings, followed by residents, then medical students (P < .01), with large effect sizes (1.6-4.4) for each comparison. There was a very strong, positive correlation between game and written test scores (r = 0.84, P < .01). These findings contribute validity evidence for PediatricSim game scores to assess knowledge of pediatric emergency medicine resuscitation.
Validating vignette and conjoint survey experiments against real-world behavior
Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei
2015-01-01
Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2 percentage points of the effects in the behavioral benchmark. PMID:25646415
Structural Analysis and Testing of the Inflatable Re-entry Vehicle Experiment (IRVE)
NASA Technical Reports Server (NTRS)
Lindell, Michael C.; Hughes, Stephen J.; Dixon, Megan; Wiley, Cliff E.
2006-01-01
The Inflatable Re-entry Vehicle Experiment (IRVE) is a 3.0 meter, 60 degree half-angle sphere cone, inflatable aeroshell experiment designed to demonstrate various aspects of inflatable technology during Earth re-entry. IRVE will be launched on a Terrier-Improved Orion sounding rocket from NASA's Wallops Flight Facility in the fall of 2006 to an altitude of approximately 164 kilometers and will re-enter the Earth's atmosphere. The experiment will demonstrate exo-atmospheric inflation, inflatable structure leak performance throughout the flight regime, structural integrity under aerodynamic pressure and associated deceleration loads, thermal protection system performance, and aerodynamic stability. Structural integrity and dynamic response of the inflatable will be monitored with photogrammetric measurements of the leeward side of the aeroshell during flight. Aerodynamic stability and drag performance will be verified with on-board inertial measurements and radar tracking from multiple ground radar stations. In addition to demonstrating inflatable technology, IRVE will help validate structural, aerothermal, and trajectory modeling and analysis techniques for the inflatable aeroshell system. This paper discusses the structural analysis and testing of the IRVE inflatable structure. Equations are presented for calculating fabric loads in sphere cone aeroshells, and finite element results are presented which validate the equations. Fabric material properties and testing are discussed along with aeroshell fabrication techniques. Stiffness and dynamics tests conducted on a small-scale development unit and a full-scale prototype unit are presented along with correlated finite element models used to predict the in-flight fundamental modes.
Pilot Validation Study of the European Association of Urology Robotic Training Curriculum.
Volpe, Alessandro; Ahmed, Kamran; Dasgupta, Prokar; Ficarra, Vincenzo; Novara, Giacomo; van der Poel, Henk; Mottrie, Alexandre
2015-08-01
The development of structured and validated training curricula is one of the current priorities in robot-assisted urological surgery. To establish the feasibility, acceptability, face validity, and educational impact of a structured training curriculum for robot-assisted radical prostatectomy (RARP), and to assess improvements in performance and ability to perform RARP after completion of the curriculum. A 12-wk training curriculum was developed based on an expert panel discussion and used to train ten fellows from major European teaching institutions. The curriculum included: (1) e-learning, (2) 1 wk of structured simulation-based training (virtual reality synthetic, animal, and cadaveric platforms), and (3) supervised modular training for RARP. The feasibility, acceptability, face validity, and educational impact were assessed using quantitative surveys. Improvement in the technical skills of participants over the training period was evaluated using the inbuilt validated assessment metrics on the da Vinci surgical simulator (dVSS). A final RARP performed by fellows on completion of their training was assessed using the Global Evaluative Assessment of Robotic Skills (GEARS) score and generic and procedure-specific scoring criteria. The median baseline experience of participants as console surgeon was 4 mo (interquartile range [IQR] 0-6.5 mo). All participants completed the curriculum and were involved in a median of 18 RARPs (IQR 14-36) during modular training. The overall score for dVSS tasks significantly increased over the training period (p<0.001-0.005). At the end of the curriculum, eight fellows (80%) were deemed able by their mentors to perform a RARP independently, safely, and effectively. At assessment of the final RARP, the participants achieved an average score ≥4 (scale 1-5) for all domains using the GEARS scale and an average score >10 (scale 4-16) for all procedural steps using a generic dedicated scoring tool. In performance comparison using this scoring tool, the experts significantly outperformed the fellows (mean score for all steps 13.6 vs 11). The European robot-assisted urologic training curriculum is acceptable, valid, and effective for training in RARP. This study shows that a 12-wk structured training program including simulation-based training and mentored training in the operating room allows surgeons with limited robotic experience to increase their robotic skills and their ability to perform the surgical steps of robot-assisted radical prostatectomy. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Pumped storage system model and experimental investigations on S-induced issues during transients
NASA Astrophysics Data System (ADS)
Zeng, Wei; Yang, Jiandong; Hu, Jinhong
2017-06-01
Because of the important role of pumped storage stations in the peak regulation and frequency control of a power grid, pump turbines must rapidly switch between different operating modes, such as fast startup and load rejection. However, pump turbines pass through the unstable S region in these transition processes, threatening the security and stability of the pumped storage station. This issue has mainly been investigated through numerical simulations, while field experiments generally involve high risks and are difficult to perform. Therefore, in this work, the model test method was employed to study S-induced security and stability issues for a pumped storage station in transition processes. First, a pumped storage system model was set up, including the piping system, model units, electrical control systems and measurement system. In this model, two pump turbines with different S-shaped characteristics were installed to determine the influence of the S-shaped characteristics on transition processes. The model platform can be applied to simulate any hydraulic transition process that occurs in real power stations, such as load rejection, startup, and grid connection. On the experimental platform, the S-shaped characteristic curves were measured to serve as the basis for the other experiments. Runaway experiments were performed to verify the impact of the S-shaped characteristics on pump turbine runaway stability. Full load rejection tests were performed to validate the effect of the S-shaped characteristics on the water-hammer pressure. The condition of one pump turbine rejecting its load after another, defined as one-after-another (OAA) load rejection, was tested to examine the possibility of S-induced extreme draft tube pressure. Load rejection experiments with different guide vane closing schemes were performed to determine a suitable scheme to accommodate the S-shaped characteristics. Through these experiments, the threats present in the station were verified, appropriate countermeasures were summarized, and an important experimental basis for the safe and stable operation of pumped storage stations was provided.
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD was compiled by RSICC and NEA; it includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from SuperMC simulation were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable in shielding calculations.
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-09-09
Ceramic-faced aluminum targets impacted by the .30cal AP M2 projectile were modeled using SPH elements. Model validation runs were conducted based on the depth-of-penetration (DoP) experiments described in the cited reference, including the effect of material properties on DoP. Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments.
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-10-01
Monolithic SiC tiles and ceramic-faced aluminum targets impacted at velocities in the range 700 m/s to 1000 m/s were modeled using SPH elements. Model validation runs with monolithic SiC tiles were conducted based on the depth-of-penetration (DoP) experiments described in the cited reference. Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments, AutoDyn simulations, tile gap.
Validity evidence for the Simulated Colonoscopy Objective Performance Evaluation scoring system.
Trinca, Kristen D; Cox, Tiffany C; Pearl, Jonathan P; Ritter, E Matthew
2014-02-01
Low-cost, objective systems to assess and train endoscopy skills are needed. The aim of this study was to evaluate the ability of Simulated Colonoscopy Objective Performance Evaluation to assess the skills required to perform endoscopy. Thirty-eight subjects were included in this study, all of whom performed 4 tasks. The scoring system measured performance by calculating precision and efficiency. Data analysis assessed the relationship between colonoscopy experience and performance on each task and the overall score. Endoscopic trainees' Simulated Colonoscopy Objective Performance Evaluation scores correlated significantly with total colonoscopy experience (r = .61, P = .003) and experience in the past 12 months (r = .63, P = .002). Significant differences were seen among practicing endoscopists, nonendoscopic surgeons, and trainees (P < .0001). When the 4 tasks were analyzed, each showed significant correlation with colonoscopy experience (scope manipulation, r = .44, P = .044; tool targeting, r = .45, P = .04; loop management, r = .47, P = .032; mucosal inspection, r = .65, P = .001) and significant differences in performance between the endoscopist groups, except for mucosal inspection (scope manipulation, P < .0001; tool targeting, P = .002; loop management, P = .0008; mucosal inspection, P = .27). Simulated Colonoscopy Objective Performance Evaluation objectively assesses the technical skills required to perform endoscopy and shows promise as a platform for proficiency-based skills training. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Lorch, Robert F., Jr.; Lorch, Elizabeth P.; Freer, Benjamin Dunham; Dunlap, Emily E.; Hodell, Emily C.; Calderhead, William J.
2014-01-01
Students (n = 1,069) from 60 4th-grade classrooms were taught the control of variables strategy (CVS) for designing experiments. Half of the classrooms were in schools that performed well on a state-mandated test of science achievement, and half were in schools that performed relatively poorly. Three teaching interventions were compared: an…
Validated simulator for space debris removal with nets and other flexible tethers applications
NASA Astrophysics Data System (ADS)
Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil
2016-12-01
In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for net-shaped elastic bodies' dynamics and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for control implementation. The underlying model has been experimentally validated; due to the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, performed in parabolic flight, was a downscaled version of the Envisat capture process. The prepacked net was launched towards the satellite model; it expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics to the respective simulations and thus to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough that it may be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation. Results are presented and typical use cases are discussed, showing that the software may be used to design throw nets for space debris capturing, but also to simulate the deorbitation process, the chaser control system or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under ESA contract, within the CleanSpace initiative.
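The following deliberately simplified lumped-mass spring-damper thread (Python) illustrates the flexible-tether idea only; the simulator described above uses the much richer Cosserat rod model, so bending and torsion are not represented here, and all parameters are hypothetical:

# Lumped-mass spring-damper thread: a crude stand-in for the Cosserat
# rod tether model (stretching with damping only; no bending/torsion).
import numpy as np

n, L, k, c, m, dt = 20, 10.0, 500.0, 2.0, 0.05, 1e-4   # all hypothetical
rest = L / (n - 1)                      # segment rest length
pos = np.zeros((n, 3))
pos[:, 0] = np.linspace(0.0, L, n)
vel = np.zeros((n, 3))
vel[-1] = [0.0, 5.0, 0.0]               # fling the free end sideways

for _ in range(int(1.0 / dt)):          # 1 s of motion, semi-implicit Euler
    force = np.zeros_like(pos)
    for i in range(n - 1):              # stretch + damping along each segment
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        u = d / length
        rel_v = np.dot(vel[i + 1] - vel[i], u)
        f = (k * (length - rest) + c * rel_v) * u
        force[i] += f
        force[i + 1] -= f
    vel += force / m * dt
    vel[0] = 0.0                        # first node anchored (e.g., to chaser)
    pos += vel * dt

print("free-end position after 1 s:", pos[-1])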
Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J
2018-05-17
The purpose of this study is to introduce a chaos level test to evaluate the performance of linear and nonlinear voice type classification methods under varying signal chaos conditions, without relying on subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were performed to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive-behavior-detection-based chaos level test was used to investigate the performance of the different voice classification methods, with voice signals constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals at all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and therefore demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
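Two of the four measures, jitter and shimmer, are standard perturbation indices; the following Python sketch computes them with the common "local" definitions on synthetic per-cycle periods and amplitudes:

# Local jitter and shimmer from per-cycle data (synthetic values).
import numpy as np

rng = np.random.default_rng(1)
periods = 0.008 + rng.normal(0, 5e-5, 100)   # s, roughly a 125 Hz voice
amps = 1.0 + rng.normal(0, 0.02, 100)        # arbitrary amplitude units

# Mean absolute cycle-to-cycle difference, normalized by the mean.
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
shimmer = np.mean(np.abs(np.diff(amps))) / np.mean(amps)
print(f"jitter  = {jitter:.4%}")
print(f"shimmer = {shimmer:.4%}")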
Laser metrology and optic active control system for GAIA
NASA Astrophysics Data System (ADS)
D'Angelo, F.; Bonino, L.; Cesare, S.; Castorina, G.; Mottini, S.; Bertinetto, F.; Bisi, M.; Canuto, E.; Musso, F.
2017-11-01
The Laser Metrology and Optic Active Control (LM&OAC) program has been carried out under ESA contract with the purpose of designing and validating a laser metrology system and an actuation mechanism to monitor and control, at the microarcsecond level, the stability of the Basic Angle (the angle between the lines of sight of the two telescopes) of the GAIA satellite. As part of the program, a breadboard (including some EQM elements) of the laser metrology and control system has been built and submitted to functional, performance and environmental tests. In what follows we describe the mission requirements, the system architecture, the breadboard design, and finally the validation tests performed. Conclusions and appraisals from this experience are also reported.
ERIC Educational Resources Information Center
Mizell, Kay
1991-01-01
Describes a study conducted at Collin County Community College to assess the writing performance of different student populations. Offers observations about writing assessment for external validity. Suggests simple procedures for quantifying writing competency. Includes a proposal for portfolio assessment. (DMM)
Al-Mamun, Mohammad; Zhu, Zhengju; Yin, Huajie; Su, Xintai; Zhang, Haimin; Liu, Porun; Yang, Huagui; Wang, Dan; Tang, Zhiyong; Wang, Yun; Zhao, Huijun
2016-08-04
A novel surface sulfur (S) doped cobalt (Co) catalyst for the oxygen evolution reaction (OER) is theoretically designed through optimisation of the electronic structure of highly reactive surface atoms; the design is also validated by electrocatalytic OER experiments.
Development and validation of a high-fidelity phonomicrosurgical trainer.
Klein, Adam M; Gross, Jennifer
2017-04-01
To validate the use of a high-fidelity phonomicrosurgical trainer. A high-fidelity phonomicrosurgical trainer, based on a model previously validated by Contag et al., was designed with multilayered vocal folds that more closely mimic the consistency of true vocal folds, containing intracordal lesions for practicing phonomicrosurgical removal. A training module was developed to simulate the true phonomicrosurgical experience. A validation study with novice and expert surgeons was conducted. Novices and experts were instructed to remove the lesion from the synthetic vocal folds, and novices were given four training trials. Performance was measured by the amount of time spent and tissue injury (microflap, superficial, deep) to the vocal fold. An independent Student t test and Fisher exact tests were used to compare subjects. A matched-paired t test and Wilcoxon signed rank tests were used to compare novice performance on the first and fourth trials and assess for improvement. Experts completed the excision with fewer total errors than novices (P = .004) and caused less injury to the microflap (P = .05) and superficial tissue (P = .003). Novices improved their performance with training, making fewer total errors (P = .002) and fewer superficial tissue injuries (P = .02) and spending less time on removal (P = .002) after several practice trials. This high-fidelity phonomicrosurgical trainer has been validated for novice surgeons. It can distinguish between experts and novices, and after training it helped to improve novice performance. N/A. Laryngoscope, 127:888-893, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
First validation of the PASSPORT training environment for arthroscopic skills.
Tuijthof, Gabriëlle J M; van Sterkenburg, Maayke N; Sierevelt, Inger N; van Oldenrijk, Jakob; Van Dijk, C Niek; Kerkhoffs, Gino M M J
2010-02-01
The demand for high-quality care contrasts with the reduced training time available for residents to develop arthroscopic skills. To this end, simulators are being introduced to train skills away from the operating room. In our clinic, a physical simulation environment to Practice Arthroscopic Surgical Skills for Perfect Operative Real-life Treatment (PASSPORT) is being developed. The PASSPORT concept consists of maintaining the normal arthroscopic equipment, replacing the human knee joint by a phantom, and integrating registration devices to provide performance feedback. The first prototype of the knee phantom allows inspection, treatment of menisci, irrigation, and limb stressing. PASSPORT was evaluated for face and construct validity. Construct validity was assessed by measuring the performance of two groups with different levels of arthroscopic experience (20 surgeons and 8 residents). Participants performed a navigation task five times on PASSPORT. Task times were recorded. Face validity was assessed by completion of a short questionnaire on the participants' impressions and comments for improvements. Construct validity was demonstrated, as the surgeons (median task time 19.7 s [8.0-37.6]) were more efficient than the residents (55.2 s [27.9-96.6]) in task completion for each repetition (Mann-Whitney U test, P < 0.05). The prototype of the knee phantom sufficiently imitated limb outer appearance (79%), portal resistance (82%), and arthroscopic view (81%). Improvements are required for the stressing device and the material of the cruciate ligaments. Our physical simulation environment (PASSPORT) demonstrates its potential to evolve into a training modality. In the future, automated performance feedback is planned.
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power-take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
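The Cummins impulse-response formulation referenced above takes the standard form below (per degree of freedom, or as a 6-DOF matrix equation); in the full code, additional force terms such as viscous drag and mooring loads also enter the right-hand side:

% Cummins equation: M mass, A_inf infinite-frequency added mass,
% K radiation impulse-response kernel, C hydrostatic stiffness,
% F_exc wave excitation, F_PTO power-take-off force.
\begin{equation}
(M + A_\infty)\,\ddot{x}(t)
  = -\int_0^t K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau
    - C\,x(t) + F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t)
\end{equation}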
NASA Technical Reports Server (NTRS)
Schulte, Peter Z.; Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
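As an illustration of the DoE pattern described, the Python sketch below enumerates a two-level full factorial over simulation inputs and flags combinations whose output violates a requirement limit. The parameter names, ranges, stand-in response function, and load limit are all hypothetical, not CPAS values:

# Two-level full-factorial DoE over (mocked) parachute-simulation inputs.
from itertools import product

factors = {                           # low/high levels, hypothetical
    "drag_coefficient": (0.75, 0.95),
    "canopy_area_ft2": (5000.0, 7000.0),
    "deploy_velocity_fps": (150.0, 250.0),
}

def peak_load_lb(cd, area, vel):      # stand-in for the real simulation
    return 0.5 * 0.0023769 * vel**2 * cd * area   # q * Cd * A at sea level

LIMIT = 250000.0                      # hypothetical requirement limit, lb
for levels in product(*factors.values()):
    run = dict(zip(factors, levels))
    load = peak_load_lb(*levels)
    flag = "VIOLATION" if load > LIMIT else "ok"
    print(run, f"-> {load:,.0f} lb {flag}")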
Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.
2015-03-01
Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta's various failure options, which include aleatory uncertainty and scale effects, are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and the bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.
Towards natural language question generation for the validation of ontologies and mappings.
Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos
2016-08-08
The increasing number of open-access ontologies and their key role in several applications, such as decision-support systems, highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as multiple choice questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and the validation time. The application of our approach to validating mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.
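A toy illustration of template-based question generation from ontology triples, in the spirit of the approach described; the triples and templates are hypothetical, and the actual system additionally factorizes, orders, and optimizes the questions:

# Template-based NL question generation from (subject, predicate, object)
# triples. Triples and templates are hypothetical examples.
TEMPLATES = {
    "subClassOf": "Is every {s} a kind of {o}?",
    "hasSymptom": "Can {s} present with {o}?",
    "treatedBy":  "Is {s} treated by {o}?",
}

triples = [
    ("viral pneumonia", "subClassOf", "pneumonia"),
    ("pneumonia", "hasSymptom", "fever"),
    ("bacterial pneumonia", "treatedBy", "antibiotics"),
]

for s, p, o in triples:
    question = TEMPLATES[p].format(s=s, o=o)
    print(question)   # submitted to a domain expert; yes/no feedback is parsed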
Interpretation of Ambiguity in Individuals with Obsessive-Compulsive Symptoms
Kuckertz, Jennie M.; Amir, Nader; Tobin, Anastacia C.; Najmi, Sadia
2013-01-01
In two experiments we examined the psychometric properties of a new measure of interpretation bias in individuals with obsessive-compulsive symptoms (OCs). In Experiment 1, 38 individuals high in OC symptoms, 34 individuals high in anxiety and dysphoric symptoms, and 31 asymptomatic individuals completed the measure. Results revealed that the Word Sentence Association Test for OCD (WSAO) can differentiate those with OC symptoms from both a matched anxious/dysphoric group and a non-anxious/non-dysphoric group. In a second experiment, we tested the predictive validity of the WSAO using a performance-based behavioral approach test of contamination fears, and found that the WSAO was a better predictor of avoidance than an established measure of OC washing symptoms (Obsessive Compulsive Inventory-Revised, washing subscale). Our results provide preliminary evidence for the reliability and validity of the WSAO as well as its usefulness in predicting response to behavioral challenge above and beyond OC symptoms, depression, and anxiety. PMID:24179287
NASA Technical Reports Server (NTRS)
Komendera, Erik E.; Dorsey, John T.
2017-01-01
Developing a capability for the assembly of large space structures has the potential to increase the capabilities and performance of future space missions and spacecraft while reducing their cost. One such application is a megawatt-class solar electric propulsion (SEP) tug, representing a critical transportation ability for the NASA lunar, Mars, and solar system exploration missions. A series of robotic assembly experiments were recently completed at Langley Research Center (LaRC) that demonstrate most of the assembly steps for the SEP tug concept. The assembly experiments used a core set of robotic capabilities: long-reach manipulation and dexterous manipulation. This paper describes cross-cutting capabilities and technologies for in-space assembly (ISA), applies the ISA approach to a SEP tug, describes the design and development of two assembly demonstration concepts, and summarizes results of two sets of assembly experiments that validate the SEP tug assembly steps.
CFD Modeling of Free-Piston Stirling Engines
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.
2001-01-01
NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.
Merlin: Computer-Aided Oligonucleotide Design for Large Scale Genome Engineering with MAGE.
Quintin, Michael; Ma, Natalie J; Ahmed, Samir; Bhatia, Swapnil; Lewis, Aaron; Isaacs, Farren J; Densmore, Douglas
2016-06-17
Genome engineering technologies now enable precise manipulation of organism genotype, but can be limited in scalability by their design requirements. Here we describe Merlin ( http://merlincad.org ), an open-source web-based tool to assist biologists in designing experiments using multiplex automated genome engineering (MAGE). Merlin provides methods to generate pools of single-stranded DNA oligonucleotides (oligos) for MAGE experiments by performing free energy calculation and BLAST scoring on a sliding window spanning the targeted site. These oligos are designed not only to improve recombination efficiency, but also to minimize off-target interactions. The application further assists experiment planning by reporting predicted allelic replacement rates after multiple MAGE cycles, and enables rapid result validation by generating primer sequences for multiplexed allele-specific colony PCR. Here we describe the Merlin oligo and primer design procedures and validate their functionality compared to OptMAGE by eliminating seven AvrII restriction sites from the Escherichia coli genome.
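The sliding-window design step can be illustrated with a minimal sketch. Here GC content stands in for the free-energy calculation and the BLAST off-target score is omitted entirely; the 90-base window, the function names and the 50% GC target are assumptions, not Merlin's actual scoring.

```python
# Minimal sketch of sliding-window oligo selection. Assumes the genome
# string is at least oligo_len bases long.

def candidate_oligos(genome: str, site: int, oligo_len: int = 90):
    """Yield (start, oligo) for every window of oligo_len spanning the site."""
    first = max(0, site - oligo_len + 1)
    last = min(site, len(genome) - oligo_len)
    for start in range(first, last + 1):
        yield start, genome[start:start + oligo_len]

def gc_score(oligo: str) -> float:
    """Fraction of G/C bases, a crude proxy for duplex stability."""
    return (oligo.count("G") + oligo.count("C")) / len(oligo)

def best_oligo(genome: str, site: int) -> str:
    """Pick the window whose GC content is closest to 50%."""
    return min(
        (oligo for _, oligo in candidate_oligos(genome, site)),
        key=lambda o: abs(gc_score(o) - 0.5),
    )

if __name__ == "__main__":
    toy_sequence = "ACGT" * 60          # 240-base toy sequence
    print(best_oligo(toy_sequence, site=120)[:20], "...")
```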
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates what is observed reasonably well.
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; O'Donnell, James R.; Hsu, Oscar H.; Ziemer, John K.; Dunn, Charles E.
2017-01-01
The Space Technology-7 Disturbance Reduction System (DRS) is an experiment package aboard the European Space Agency (ESA) LISA Pathfinder spacecraft. LISA Pathfinder launched from Kourou, French Guiana on December 3, 2015. The DRS is tasked to validate two specific technologies: colloidal micro-Newton thrusters (CMNT) to provide low-noise control capability of the spacecraft, and drag-free flight control. This validation is performed using highly sensitive drag-free sensors, which are provided by the LISA Technology Package of the European Space Agency. The Disturbance Reduction System is required to maintain the spacecraft's position with respect to a free-floating test mass to better than 10 nm/(square root of Hz) along its sensitive axis (the axis of the optical metrology). It also has a goal of limiting the residual accelerations of either of the two test masses to below 30 x 10(exp -14) (1 + ([f/3 mHz](exp 2))) m/sq s/(square root of Hz) over the frequency range of 1 to 30 mHz. This paper briefly describes the design and the expected on-orbit performance of the control system for the two modes wherein the drag-free performance requirements are verified. The on-orbit performance of these modes is then compared to the requirements, as well as to the expected performance, and discussed.
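For readability, the two requirements above can be transcribed from the abstract's plain-text notation into standard amplitude spectral density form (a transcription of the stated formulas, not an independent source):

```latex
\[
  S_x^{1/2}(f) \le 10~\mathrm{nm}/\sqrt{\mathrm{Hz}}, \qquad
  S_a^{1/2}(f) \le 30 \times 10^{-14}
  \left[ 1 + \left( \frac{f}{3~\mathrm{mHz}} \right)^{2} \right]
  \mathrm{m\,s^{-2}}/\sqrt{\mathrm{Hz}},
  \quad 1~\mathrm{mHz} \le f \le 30~\mathrm{mHz}.
\]
```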
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have uncorrelated fission emissions. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
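For context, the Feynman histogram observable named above summarizes the distribution of neutron counts in fixed time gates; its Y statistic measures excess variance over a Poisson process. A minimal sketch with synthetic counts (the gate rate and multiplicity model are invented, not benchmark data):

```python
import numpy as np

def feynman_y(counts: np.ndarray) -> float:
    """Y = variance/mean - 1; zero in expectation for Poisson counts."""
    return counts.var(ddof=1) / counts.mean() - 1.0

rng = np.random.default_rng(0)

# Uncorrelated source: pure Poisson counts per time gate -> Y ~ 0.
poisson_counts = rng.poisson(lam=4.0, size=20_000)

# Correlated source: a Poisson number of fission chains per gate, each
# contributing several neutrons, inflates the variance -> Y > 0.
chains = rng.poisson(lam=2.0, size=20_000)
correlated_counts = np.array(
    [rng.poisson(lam=2.0, size=n).sum() for n in chains]
)

print(f"uncorrelated: Y = {feynman_y(poisson_counts):+.3f}")
print(f"correlated:   Y = {feynman_y(correlated_counts):+.3f}")
```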
Design and validation of an intelligent wheelchair towards a clinically-functional outcome.
Boucher, Patrice; Atrash, Amin; Kelouwani, Sousso; Honoré, Wormser; Nguyen, Hai; Villemure, Julien; Routhier, François; Cohen, Paul; Demers, Louise; Forget, Robert; Pineau, Joelle
2013-06-17
Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.
Development of an Integrated Nozzle for a Symmetric, RBCC Launch Vehicle Configuration
NASA Technical Reports Server (NTRS)
Smith, Timothy D.; Canabal, Francisco, III; Rice, Tharen; Blaha, Bernard
2000-01-01
The development of rocket based combined cycle (RBCC) engines is highly dependent upon integrating several different modes of operation into a single system. One of the key components to develop acceptable performance levels through each mode of operation is the nozzle. It must be highly integrated to serve the expansion processes of both rocket and air-breathing modes without undue weight, drag, or complexity. The NASA GTX configuration requires a fixed geometry, altitude-compensating nozzle configuration. The initial configuration, used mainly to estimate weight and cooling requirements, was a 15° half-angle cone, which cuts a concave surface from a point within the flowpath to the vehicle trailing edge. Results of 3-D CFD calculations on this geometry are presented. To address the critical issues associated with integrated, fixed geometry, multimode nozzle development, the GTX team has initiated a series of tasks to evolve the nozzle design and validate performance levels. An overview of these tasks is given. The first element is a design activity to develop tools for integration of efficient expansion surfaces with the existing flowpath and vehicle aft-body, and to develop a second-generation nozzle design. A preliminary result using a "streamline-tracing" technique is presented. As the nozzle design evolves, a combination of 3-D CFD analysis and experimental evaluation will be used to validate the design procedure and determine the installed performance for propulsion cycle modeling. The initial experimental effort will consist of cold-flow experiments designed to validate the general trends of the streamline-tracing methodology and anchor the CFD analysis. Experiments will also be conducted to simulate nozzle performance during each mode of operation. As the design matures, hot-fire tests will be conducted to refine performance estimates and anchor more sophisticated reacting-flow analysis.
Dimension-based attention in visual short-term memory.
Pilling, Michael; Barrett, Doug J K
2016-07-01
We investigated how dimension-based attention influences visual short-term memory (VSTM). This was done through examining the effects of cueing a feature dimension in two perceptual comparison tasks (change detection and sameness detection). In both tasks, a memory array and a test array consisting of a number of colored shapes were presented successively, interleaved by a blank interstimulus interval (ISI). In Experiment 1 (change detection), the critical event was a feature change in one item across the memory and test arrays. In Experiment 2 (sameness detection), the critical event was the absence of a feature change in one item across the two arrays. Auditory cues indicated the feature dimension (color or shape) of the critical event with 80% validity; the cues were presented either prior to the memory array, during the ISI, or simultaneously with the test array. In Experiment 1, the cue validity influenced sensitivity only when the cue was given at the earliest position; in Experiment 2, the cue validity influenced sensitivity at all three cue positions. We attributed the greater effectiveness of top-down guidance by cues in the sameness detection task to the more active nature of the comparison process required to detect sameness events (Hyun, Woodman, Vogel, Hollingworth, & Luck, Journal of Experimental Psychology: Human Perception and Performance, 35, 1140-1160, 2009).
NASA Astrophysics Data System (ADS)
Leconte, Pierre; Bernard, David
2017-09-01
EXCALIBUR is an integral transmission experiment based on the fast neutron source produced by the bare, highly enriched fast burst reactor CALIBAN, located at CEA/DAM Valduc (France). Two experimental campaigns have been performed, one using a sphere of 17 cm diameter and one using two cylinders of 17 cm diameter and 9 cm height, both made of metallic uranium-238. A set of 15 different dosimeters with specific threshold energies has been employed to provide information on the neutron flux attenuation as a function of incident energy. Measurement uncertainties are typically in the range of 0.5-3% (1σ). The analysis of these experiments is performed with the TRIPOLI4 continuous-energy Monte Carlo code. A calculation benchmark with validated simplifications is defined in order to bring the statistical convergence below 2%. Various 238U evaluations have been tested: JEFF-3.1.1, ENDF/B-VII.1 and the IB36 evaluation from IAEA. A sensitivity analysis is presented to identify the contribution of each reaction cross section to the integral transmission rate. This feedback may be of interest for the international effort on 238U through the CIELO project.
Laboratory outreach: student assessment of flow cytometer fluidics in zero gravity.
Crucian, B; Norman, J; Brentz, J; Pietrzyk, R; Sams, C
2000-10-01
Due to the clinical utility of the flow cytometer, the National Aeronautics and Space Administration (NASA) is interested in the design of a space flight-compatible cytometer for use on long-duration space missions. Because fluid behavior is altered dramatically during space flight, it was deemed necessary to validate the principles of hydrodynamic focusing and laminar flow (cytometer fluidics) in a true microgravity environment. An experiment to validate these properties was conducted by 12 students from Sweetwater High School (Sweetwater, TX) participating in the NASA Reduced Gravity Student Flight Opportunity, Class of 2000. This program allows high school students to gain scientific experience by conducting an experiment on the NASA KC-135 zero-gravity laboratory aircraft. The KC-135 creates actual zero-gravity conditions in 30-second intervals by flying a highly inclined parabolic flight path. The experiment was designed by their mentor in the program, the Johnson Space Center's flow cytometrist Brian Crucian, PhD, MT(ASCP). The students performed the experiment, with the mentor, onboard the NASA zero-gravity research aircraft in April 2000.
Detonation failure characterization of non-ideal explosives
NASA Astrophysics Data System (ADS)
Janesheski, Robert S.; Groven, Lori J.; Son, Steven
2012-03-01
Non-ideal explosives are currently poorly characterized, which limits modeling of them. Current characterization requires large-scale testing to obtain steady detonation-wave data for analysis, owing to the relatively thick reaction zones. Use of a microwave interferometer applied to small-scale confined transient experiments is being implemented to allow for time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.
BIM LAU-PE: Seedlings in Microgravity
NASA Astrophysics Data System (ADS)
Gass, S.; Pennese, R.; Chapuis, D.; Dainesi, P.; Nebuloni, S.; Garcia, M.; Oriol, A.
2015-09-01
The effect of gravity on plant roots is an intensive subject of research. Sounding rockets represent a cost-effective platform to study this effect under microgravity conditions. As part of the upcoming MASER 13 sounding rocket campaign, two experiments on Arabidopsis thaliana seedlings have been devised: GRAMAT and SPARC. These experiments are aimed at studying (1) the genes that are specifically switched on or off during microgravity, and (2) the position of auxin-transporting proteins during microgravity. To perform these experiments, the RUAG Space Switzerland site in Nyon, in collaboration with the Swedish Space Corporation (SSC) and the University of Freiburg, has developed the BIM LAU-PE (Biology In Microgravity Late Access Unit Plant Experiment). In the following, an overview of the BIM LAU-PE design is presented, highlighting specific module design features and the verifications performed. A particular emphasis is placed on the parabolic flight experiments, including results of the micro-g injection system validation.
Akhtar, Kashif; Sugand, Kapil; Sperrin, Matthew; Cobb, Justin; Standfield, Nigel; Gupte, Chinmay
2015-01-01
Virtual-reality (VR) simulation in orthopedic training is still in its infancy, and much of the work has been focused on arthroscopy. We evaluated the construct validity of a new VR trauma simulator for performing dynamic hip screw (DHS) fixation of a trochanteric femoral fracture. 30 volunteers were divided into 3 groups according to the number of postgraduate (PG) years and the amount of clinical experience: novice (1-4 PG years; less than 10 DHS procedures); intermediate (5-12 PG years; 10-100 procedures); expert (> 12 PG years; > 100 procedures). Each participant performed a DHS procedure and objective performance metrics were recorded. These data were analyzed with each performance metric taken as the dependent variable in 3 regression models. There were statistically significant differences in performance between groups for (1) number of attempts at guide-wire insertion, (2) total fluoroscopy time, (3) tip-apex distance, (4) probability of screw cutout, and (5) overall simulator score. The intermediate group performed the procedure most quickly, with the lowest fluoroscopy time, the lowest tip-apex distance, the lowest probability of cutout, and the highest simulator score, which correlated with their frequency of exposure to running the trauma lists for hip fracture surgery. This study demonstrates the construct validity of a haptic VR trauma simulator with surgeons undertaking the procedure most frequently performing best on the simulator. VR simulation may be a means of addressing restrictions on working hours and allows trainees to practice technical tasks without putting patients at risk. The VR DHS simulator evaluated in this study may provide valid assessment of technical skill.
NASA Astrophysics Data System (ADS)
Tanaka, Hiroshi; Hashimura, Shinji; Hiroo, Yasuaki
We present a program for developing engineering problem-solving ability. The program, called "Experiments in creative engineering", is offered in the department of mechanical engineering of the Kurume National College of Technology advanced engineering school. In the program, students must choose their own theme and manufacture experimental devices or machines by themselves. The students must also perform experiments themselves to validate the function and performance of their devices. The restrictions on the theme are that the device's function must not already exist and that the cost is limited (up to 20,000 yen). Student questionnaire results suggest that the program is very effective for creative education.
NASA Technical Reports Server (NTRS)
Craig, Roger A.; Davy, William C.; Whiting, Ellis E.
1994-01-01
The Radiative Heating Experiment, RHE, aboard the Aeroassist Flight Experiment, AFE, (now cancelled) was to make in-situ measurements of the stagnation region shock layer radiation during an aerobraking maneuver from geosynchronous to low earth orbit. The measurements were to provide a data base to help develop and validate aerothermodynamic computational models. Although cancelled, much work was done to develop the science requirements and to successfully meet RHE technical challenges. This paper discusses the RHE scientific objectives and expected science performance of a small sapphire window for the RHE radiometers. The spectral range required was from 170 to 900 nm. The window size was based on radiometer sensitivity requirements including capability of on-orbit solar calibration.
Learning to make Decisions: When Incentives help and Hinder
1989-06-01
environments, but lenient environments are forgiving. It is assumed that incentives increase effort and attention but do not have a direct effect on performance... because of ceiling and floor effects: in the former, there is little room for improvement; in the latter, little possibility for decrements in performance. In... in exacting. These predictions are tested and validated in two experiments. A further experiment tests the effects of having subjects concentrate on
2013-01-01
experiments on liquid metal jets. The FronTier-MHD code has been used for simulations of liquid mercury targets for the proposed muon collider... Validation studies of the FronTier-MHD code have been performed using experimental and theoretical studies of liquid mercury jets in magnetic fields. Experimental studies of a
1980-09-01
used to accomplish the necessary research. One such experiment design and its relationship to validity will be explored next. Nonequivalent Control... interpreting the results. The nonequivalent control group design is of the quasi-experimental variety and is widely used in educational research. As... biofeedback research literature is the controlled group outcome study. This design has also been discussed in Chapter III in two forms as the
Effects of monetary reward and punishment on information checking behaviour: An eye-tracking study.
Li, Simon Y W; Cox, Anna L; Or, Calvin; Blandford, Ann
2018-07-01
The aim of the present study was to investigate the effect of error consequence, as reward or punishment, on individuals' checking behaviour following data entry. The study comprised two eye-tracking experiments that replicate and extend the investigation of Li et al. (2016) into the effect of monetary reward and punishment on data-entry performance. The first experiment adopted the same experimental setup as Li et al. (2016) but additionally used an eye tracker. The experiment validated Li et al.'s (2016) finding that, when compared to no error consequence, both reward and punishment led to improved data-entry performance in terms of reducing errors, and that no performance difference was found between reward and punishment. The second experiment extended the earlier study by associating error consequence with each individual trial through immediate performance feedback to participants. It was found that gradual increment (i.e. reward feedback) also led to significantly more accurate performance than no error consequence. It is unclear whether gradual increment is more effective than gradual decrement because of the small sample size tested. However, this study reasserts the effectiveness of reward on data-entry performance.
Bor, Jacob; Geldsetzer, Pascal; Venkataramani, Atheendar; Bärnighausen, Till
2015-11-01
Randomized, population-representative trials of clinical interventions are rare. Quasi-experiments have been used successfully to generate causal evidence on the cascade of HIV care in a broad range of real-world settings. Quasi-experiments exploit exogenous, or quasi-random, variation occurring naturally in the world or because of an administrative rule or policy change to estimate causal effects. Well designed quasi-experiments have greater internal validity than typical observational research designs. At the same time, quasi-experiments may also have potential for greater external validity than experiments and can be implemented when randomized clinical trials are infeasible or unethical. Quasi-experimental studies have established the causal effects of HIV testing and initiation of antiretroviral therapy on health, economic outcomes and sexual behaviors, as well as indirect effects on other community members. Recent quasi-experiments have evaluated specific interventions to improve patient performance in the cascade of care, providing causal evidence to optimize clinical management of HIV. Quasi-experiments have generated important data on the real-world impacts of HIV testing and treatment and on interventions to improve the cascade of care. With the growth in large-scale clinical and administrative data, quasi-experiments enable rigorous evaluation of policies implemented in real-world settings.
A Mercury Model of Atmospheric Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Alex B.; Chodash, Perry A.; Procassini, R. J.
Using the particle transport code Mercury, accurate models were built of the two sources used in Operation BREN, a series of radiation experiments performed by the United States during the 1960s. In the future, these models will be used to validate Mercury’s ability to simulate atmospheric transport.
Inflatable Re-Entry Vehicle Experiment (IRVE) Design Overview
NASA Technical Reports Server (NTRS)
Hughes, Stephen J.; Dillman, Robert A.; Starr, Brett R.; Stephan, Ryan A.; Lindell, Michael C.; Player, Charles J.; Cheatwood, F. McNeil
2005-01-01
Inflatable aeroshells offer several advantages over traditional rigid aeroshells for atmospheric entry. Inflatables offer increased payload volume fraction of the launch vehicle shroud and the possibility to deliver more payload mass to the surface for equivalent trajectory constraints. An inflatable's diameter is not constrained by the launch vehicle shroud. The resultant larger drag area can provide deceleration equivalent to a rigid system at higher atmospheric altitudes, thus offering access to higher landing sites. When stowed for launch and cruise, inflatable aeroshells allow access to the payload after the vehicle is integrated for launch and offer direct access to vehicle structure for structural attachment with the launch vehicle. They also offer an opportunity to eliminate system duplication between the cruise stage and entry vehicle. There are, however, several potential technical challenges for inflatable aeroshells. First and foremost is the fact that they are flexible structures. That flexibility could lead to unpredictable drag performance or an aerostructural dynamic instability. In addition, durability of large inflatable structures may limit their application. They are susceptible to puncture, a potentially catastrophic insult, from many possible sources. Finally, aerothermal heating during planetary entry poses a significant challenge to a thin membrane. NASA Langley Research Center and NASA's Wallops Flight Facility are jointly developing inflatable aeroshell technology for use on future NASA missions. The technology will be demonstrated in the Inflatable Re-entry Vehicle Experiment (IRVE). This paper will detail the development of the initial IRVE inflatable system to be launched on a Terrier/Orion sounding rocket in the fourth quarter of CY2005. The experiment will demonstrate achievable packaging efficiency of the inflatable aeroshell for launch, inflation, leak performance of the inflatable system throughout the flight regime, structural integrity when exposed to a relevant dynamic pressure, and aerodynamic stability of the inflatable system. Structural integrity and structural response of the inflatable will be verified with photogrammetric measurements of the back side of the aeroshell in flight. Aerodynamic stability as well as drag performance will be verified with on-board inertial measurements and radar tracking from multiple ground radar stations. The experiment will yield valuable information about zero-g vacuum deployment dynamics of the flexible inflatable structure with both inertial and photographic measurements. In addition to demonstrating inflatable technology, IRVE will validate structural, aerothermal, and trajectory modeling techniques for the inflatable. Structural response determined from photogrammetrics will validate structural models, skin temperature measurements and additional in-depth temperature measurements will validate material thermal performance models, and on-board inertial measurements along with radar tracking from multiple ground radar stations will validate trajectory simulation models.
Lambert, Carole; Gagnon, Robert; Nguyen, David; Charlin, Bernard
2009-01-01
Background: The Script Concordance test (SCT) is a reliable and valid tool to evaluate clinical reasoning in complex situations where experts' opinions may be divided. Scores reflect the degree of concordance between the performance of examinees and that of a reference panel of experienced physicians. The purpose of this study is to demonstrate the SCT's usefulness in radiation oncology. Methods: A 90-item radiation oncology SCT was administered to 155 participants. Three levels of experience were tested: medical students (n = 70), radiation oncology residents (n = 38) and radiation oncologists (n = 47). Statistical tests were performed to assess reliability and to document validity. Results: After item optimization, the test comprised 30 cases and 70 questions. Cronbach alpha was 0.90. Mean scores were 51.62 (± 8.19) for students, 71.20 (± 9.45) for residents and 76.67 (± 6.14) for radiation oncologists. The difference between the three groups was statistically significant when compared by the Kruskal-Wallis test (p < 0.001). Conclusion: The SCT is reliable and useful to discriminate among participants according to their level of experience in radiation oncology. It appears to be a useful tool to document the progression of reasoning during residency training. PMID:19203358
Livingstone Model-Based Diagnosis of Earth Observing One Infusion Experiment
NASA Technical Reports Server (NTRS)
Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.
2004-01-01
The Earth Observing One satellite, launched in November 2000, is an active earth science observation platform. This paper reports on the progress of an infusion experiment in which the Livingstone 2 Model-Based Diagnostic engine is deployed on Earth Observing One, demonstrating the capability to monitor the nominal operation of the spacecraft under command of an on-board planner, and demonstrating on-board diagnosis of spacecraft failures. Design and development of the experiment, specification and validation of diagnostic scenarios, characterization of performance results and benefits of the model-based approach are presented.
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the chance of success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.
Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process
NASA Astrophysics Data System (ADS)
Macias, Jorge
2017-04-01
In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
NASA Technical Reports Server (NTRS)
1991-01-01
A study was performed to determine the feasibility of conducting a flight test of the Superconducting Gravity Gradiometer (SGG) Experiment Module on one of the reflights of the European Retrievable Carrier (EURECA). EURECA was developed expressly to accommodate space science experimentation, while providing a high quality microgravity environment. As a retrievable carrier, it offers the ability to recover science experiments after a nominal six months of operations in orbit. The study concluded that the SGG Experiment Module can be accommodated and operated in a EURECA reflight mission. It was determined that such a flight test would enable the verification of the SGG Instrument flight performance and validate the design and operation of the Experiment Module. It was also concluded that a limited amount of scientific data could be obtained on this mission.
Predictive design and interpretation of colliding pulse injected laser wakefield experiments
NASA Astrophysics Data System (ADS)
Cormier-Michel, Estelle; Ranjbar, Vahid H.; Cowan, Ben M.; Bruhwiler, David L.; Geddes, Cameron G. R.; Chen, Min; Ribera, Benjamin; Esarey, Eric; Schroeder, Carl B.; Leemans, Wim P.
2010-11-01
The use of colliding laser pulses to control the injection of plasma electrons into the plasma wake of a laser plasma accelerator is a promising approach to obtaining stable, tunable electron bunches with reduced emittance and energy spread. Colliding Pulse Injection (CPI) experiments are being performed by groups around the world. We will present recent particle-in-cell simulations, using the parallel VORPAL framework, of CPI for physical parameters relevant to ongoing experiments of the LOASIS program at LBNL. We evaluate the effect of laser and plasma tuning on the trapped electron bunch and perform parameter scans in order to optimize the quality of the bunch. The impact of non-ideal effects such as imperfect laser modes and laser self-focusing is also evaluated. Simulation data are validated against current experimental results and are used to design future experiments.
2015-03-26
Human Universal Measurement and Assessment Network (HUMAN) Lab human performance experiment trials were used to train, validate and test the... calming music to ease the individual before the start of the study [8]. EEG data contains noise ranging from muscle twitches, blinking and other functions
Adapting Local Features for Face Detection in Thermal Image.
Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro
2017-11-27
A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features, whose different advantages enhance the descriptive power of the classifier. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discussed the results.
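A minimal sketch of a Multi-Block LBP code with a comparison margin, in the spirit of the first approach above; the block grid, neighbor order and margin value are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def mb_lbp_code(patch: np.ndarray, margin: float = 2.0) -> int:
    """Compute an 8-bit MB-LBP code for a patch covering a 3x3 block grid.

    Each of the 9 blocks is averaged; a neighbor bit is set only when its
    mean exceeds the center mean by at least `margin`, making the code
    robust to small, noise-level temperature differences.
    """
    h, w = patch.shape
    bh, bw = h // 3, w // 3
    means = patch[:3 * bh, :3 * bw].reshape(3, bh, 3, bw).mean(axis=(1, 3))
    center = means[1, 1]
    # Clockwise neighbors starting at the top-left block.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if means[i, j] >= center + margin:
            code |= 1 << bit
    return code

# Fake 24x24 "temperature" patch around 30 degrees C for demonstration.
patch = np.random.default_rng(1).normal(30.0, 3.0, size=(24, 24))
print(f"MB-LBP code: {mb_lbp_code(patch):08b}")
```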
Development and validation of a low-cost mobile robotics testbed
NASA Astrophysics Data System (ADS)
Johnson, Michael; Hayes, Martin J.
2012-03-01
This paper considers the design, construction and validation of a low-cost experimental robotic testbed, which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, for validating theoretical as well as practical research work in the field of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real-time using the overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates are clearly addressed. The testbed performance is quantified and several experiments involving LEGO Mindstorm NXT and Merlin System MiaBot robots are discussed.
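One of the problems the paper names, radial lens distortion, is conventionally handled with a polynomial model x_u = x_d (1 + k1 r^2 + k2 r^4). A minimal sketch with invented coefficients and principal point (the testbed's actual calibration values are not given here):

```python
import numpy as np

def undistort(points: np.ndarray, center: np.ndarray,
              k1: float = -2.5e-7, k2: float = 1.0e-13) -> np.ndarray:
    """Map distorted pixel coordinates (N, 2) to corrected coordinates
    using the standard two-term radial polynomial model."""
    d = points - center                       # offsets from principal point
    r2 = (d ** 2).sum(axis=1, keepdims=True)  # squared radius per point
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return center + d * scale

center = np.array([320.0, 240.0])             # assumed 640x480 camera center
detections = np.array([[50.0, 60.0], [600.0, 400.0]])
print(undistort(detections, center))
```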
Integration and Test Flight Validation Plans for the Pulsed Plasma Thruster Experiment on EO- 1
NASA Technical Reports Server (NTRS)
Zakrzwski, Charles; Benson, Scott; Sanneman, Paul; Hoskins, Andy; Bauer, Frank H. (Technical Monitor)
2002-01-01
The Pulsed Plasma Thruster (PPT) Experiment on the Earth Observing One (EO-1) spacecraft has been designed to demonstrate the capability of a new-generation PPT to perform spacecraft attitude control. The PPT is a small, self-contained pulsed electromagnetic propulsion system capable of delivering high specific impulse (900-1200 s) and very small impulse bits (10-1000 uN-s) at low average power (less than 1 to 100 W). Teflon fuel is ablated and slightly ionized by means of a capacitive discharge. The discharge also generates electromagnetic fields that accelerate the plasma by means of the Lorentz force. EO-1 has a single PPT that can produce thrust in either the positive or negative pitch direction. The flight validation has been designed to demonstrate the ability of the PPT to provide precision pointing accuracy, response and stability, and to confirm benign plume and EMI effects. This paper will document the success of the flight validation.
Enhanced Missing Proteins Detection in NCI60 Cell Lines Using an Integrative Search Engine Approach.
Guruceaga, Elizabeth; Garin-Muga, Alba; Prieto, Gorka; Bejarano, Bartolomé; Marcilla, Miguel; Marín-Vicente, Consuelo; Perez-Riverol, Yasset; Casal, J Ignacio; Vizcaíno, Juan Antonio; Corrales, Fernando J; Segura, Victor
2017-12-01
The Human Proteome Project (HPP) aims to decipher the complete map of the human proteome. In the past few years, significant efforts of the HPP teams have been dedicated to the experimental detection of the missing proteins, which lack reliable mass spectrometry evidence of their existence. In this endeavor, an in-depth analysis of shotgun experiments might represent a valuable resource for selecting a biological matrix when designing validation experiments. In this work, we used all the proteomic experiments from the NCI60 cell lines and applied an integrative approach based on the results obtained from Comet, Mascot, OMSSA, and X!Tandem. This workflow benefits from the complementarity of these search engines to increase the proteome coverage. Five missing proteins compliant with C-HPP guidelines were identified, although further validation is needed. Moreover, 165 missing proteins were detected with only one unique peptide, and their functional analysis supported their participation in cellular pathways, as was also proposed in other studies. Finally, we performed a combined analysis of the gene expression levels and the proteomic identifications from the cell lines common to the NCI60 and CCLE projects to suggest alternatives for further validation of missing protein observations.
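A minimal sketch of one way to combine results from several search engines, in the spirit of the integrative workflow above; the input format and the two-engine support threshold are illustrative assumptions, and the paper's actual integration strategy may differ:

```python
from collections import defaultdict

def integrate(psm_sets, min_engines=2):
    """psm_sets: engine name -> set of confidently identified peptides.
    Returns peptides supported by at least min_engines engines."""
    support = defaultdict(set)
    for engine, peptides in psm_sets.items():
        for pep in peptides:
            support[pep].add(engine)
    return {pep for pep, eng in support.items() if len(eng) >= min_engines}

# Toy peptide identifications per engine (invented sequences).
results = {
    "Comet":    {"LVNELTEFAK", "YLYEIAR"},
    "Mascot":   {"LVNELTEFAK", "DLGEEHFK"},
    "OMSSA":    {"YLYEIAR", "LVNELTEFAK"},
    "X!Tandem": {"DLGEEHFK"},
}
print(sorted(integrate(results)))  # all three pass the 2-engine threshold
```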
NASA Technical Reports Server (NTRS)
Moes, Timothy R.
2009-01-01
The principal objective of the Supersonics Project is to develop and validate multidisciplinary physics-based predictive design, analysis and optimization capabilities for supersonic vehicles. For aircraft, the focus will be on eliminating the efficiency, environmental and performance barriers to practical supersonic flight. Previous flight projects found that a shaped sonic boom could propagate all the way to the ground (F-5 SSBD experiment) and validated design tools for forebody shape modifications (F-5 SSBD and Quiet Spike experiments). The current project, Lift and Nozzle Change Effects on Tail Shock (LaNCETS) seeks to obtain flight data to develop and validate design tools for low-boom tail shock modifications. Attempts will be made to alter the shock structure of NASA's NF-15B TN/837 by changing the lift distribution by biasing the canard positions, changing the plume shape by under- and over-expanding the nozzles, and changing the plume shape using thrust vectoring. Additional efforts will measure resulting shocks with a probing aircraft (F-15B TN/836) and use the results to validate and update predictive tools. Preliminary flight results are presented and are available to provide truth data for developing and validating the CFD tools required to design low-boom supersonic aircraft.
Kawata, Ariane K; Wilson, Hilary; Ong, Siew Hwa; Kulich, Karoly; Coyne, Karin
2016-10-01
The aim of this study was to evaluate the factor structure and psychometric characteristics of the Hypoglycemia Perspectives Questionnaire (HPQ) assessing experience and perceptions of hypoglycemia in patients with type 2 diabetes mellitus (T2DM). HPQ was administered to adults with T2DM in a clinical sample from Cyprus (HYPO-Cyprus, n = 500) and a community sample in the United States (US, n = 1257) from the 2011 US National Health and Wellness Survey. Demographic and clinical data were collected. Analysis of HPQ data from two convenience samples examined item performance, factor structure, and HPQ measurement properties (reliability, convergent validity, known-groups validity). Analyses supported three HPQ domains: symptom concern (six items), compensatory behavior (five items), and worry (five items). Internal consistency was high for all three domains (all ≥0.75), supporting reliability. Convergent validity was supported by moderate Spearman correlations between HPQ domain scores and the Audit of Diabetes-Dependent Quality of Life (ADDQoL-19) total score. Patients with recent hypoglycemia events had significantly higher HPQ scores, supporting known-group validity. HPQ may be a valid and reliable measure capturing the experience and impact of hypoglycemia and useful in clinical trials and community-based settings.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
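One pitfall of this kind is easy to reproduce: performing feature selection on the full dataset before cross-validation leaks label information into the held-out folds and biases the estimated accuracy upward. A minimal Monte Carlo-style sketch on pure-noise data, where the true accuracy is 50% (sample sizes, k and the classifier are illustrative choices, not the paper's simulation design):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))     # 60 "cases", 2000 pure-noise features
y = rng.integers(0, 2, size=60)     # random labels: nothing to learn

# Flawed protocol: feature selection sees all labels, then cross-validation.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
flawed = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=5)

# Correct protocol: selection is refit inside each training fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20),
                     LogisticRegression(max_iter=1000))
proper = cross_val_score(pipe, X, y, cv=5)

print(f"flawed CV accuracy: {flawed.mean():.2f}")   # well above 0.5
print(f"proper CV accuracy: {proper.mean():.2f}")   # near 0.5
```

The gap between the two estimates illustrates the kind of optimistic bias the authors quantify by simulation.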
NASA Astrophysics Data System (ADS)
Yugatama, A.; Rohmani, S.; Dewangga, A.
2018-03-01
Atorvastatin is the primary choice for dyslipidemia treatment. Because the patent on atorvastatin has expired, the pharmaceutical industry produces copies of the drug, so methods for tablet quality testing that determine the atorvastatin content of tablets need to be developed. The purpose of this research was to develop and validate a simple HPLC analytical method for atorvastatin tablets. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 x 4.6 mm, 5 µm) as the reversed-phase stationary phase, a methanol-water mixture at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at a wavelength of 245 nm. The validation covered selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method shows good selectivity, linearity, accuracy, precision, LOD, and LOQ for the analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, respectively, and the linearity range was 20-120 ng/mL.
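As an illustration of how LOD and LOQ are typically derived from a calibration curve, a minimal sketch using the common ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S; the calibration points below are invented and do not reproduce the study's figures:

```python
import numpy as np

# Invented calibration data: concentration (ng/mL) vs. peak area.
conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)
area = np.array([10.1, 20.3, 29.8, 40.5, 50.2, 59.9])

slope, intercept = np.polyfit(conc, area, 1)      # linear calibration fit
fitted = slope * conc + intercept
# Residual standard deviation with n - 2 degrees of freedom.
residual_sd = np.sqrt(((area - fitted) ** 2).sum() / (len(conc) - 2))
r = np.corrcoef(conc, area)[0, 1]                 # linearity check

lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"r = {r:.4f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```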
Individual Differences in Reported Visual Imagery and Memory Performance.
ERIC Educational Resources Information Center
McKelvie, Stuart J.; Demers, Elizabeth G.
1979-01-01
High- and low-visualizing males, identified by the self-report VVIQ, participated in a memory experiment involving abstract words, concrete words, and pictures. High-visualizers were superior on all items in short-term recall but superior only on pictures in long-term recall, supporting the VVIQ's validity. (Author/SJL)
Optical simulations for experimental networks: lessons from MONET
NASA Astrophysics Data System (ADS)
Richards, Dwight H.; Jackel, Janet L.; Goodman, Matthew S.; Roudas, Ioannis; Wagner, Richard E.; Antoniades, Neophytos
1999-08-01
We have used optical simulations as a means of setting component requirements, assessing component compatibility, and designing experiments in the MONET (Multiwavelength Optical Networking) Project. This paper reviews the simulation method, gives some examples of the types of simulations that have been performed, and discusses the validation of the simulations.
Offshore Standards and Research Validation | Wind | NREL
Research capabilities include 35 years of wind turbine testing experience and a custom high-speed data acquisition system. Building on its turbine testing expertise, NREL has developed instrumentation for high-resolution measurements at sea, supported by engineers and technicians who conduct a wide range of field measurements to verify turbine performance.
Active–passive soil moisture retrievals during the SMAP validation experiment 2012
USDA-ARS?s Scientific Manuscript database
The goal of this study is to assess the performance of the active–passive algorithm for the NASA Soil Moisture Active Passive mission (SMAP) using airborne and ground observations from a field campaign. The SMAP active–passive algorithm disaggregates the coarse-resolution radiometer brightness temperature...
DOT National Transportation Integrated Search
1992-08-01
This experiment was conducted to expand initial efforts to validate the requirement for normal color vision in Air Traffic Control Specialist (ATCS) personnel who work at en route center, terminal, and flight service station facilities. An enlarged d...
Assessing Students' Communication Skills: Validation of a Global Rating
ERIC Educational Resources Information Center
Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose
2008-01-01
Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences its importance should be emphasised by performance-based assessment. As detailed checklists have been shown to be not well suited for the assessment of communication skills for different reasons, this study aimed to…
Perspectives on Veterinary Medical Education: The Tuskegee Experience.
ERIC Educational Resources Information Center
Adams, E. W.; Habtemariam, T.
The extent to which Veterinary Aptitude Test (VAT) scores are valid predictors of veterinary student performance and the effect of a summer enrichment program were assessed for Tuskegee Institute and Auburn University students. In addition, attention was directed to predictors of specialty choices and patterns of specialty choices and employment…
Techniques for obtaining subjective response to vertical vibration
NASA Technical Reports Server (NTRS)
Clarke, M. J.; Oborne, D. J.
1975-01-01
Laboratory experiments were performed to validate the techniques used for obtaining ratings in the field surveys carried out by the University College of Swansea. In addition, attempts were made to evaluate the basic form of the human response to vibration. Some of the results obtained by different methods are described.
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.
NASA Astrophysics Data System (ADS)
Villa, Enrique; Cano, Juan L.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Mediavilla, Ángel; Artal, Eduardo
2018-03-01
This paper describes the analysis, design and characterization of a polarimetric receiver developed for covering the 35 to 47 GHz frequency band in the new instrument aimed at completing the ground-based Q-U-I Joint Tenerife Experiment. This experiment is designed to measure polarization in the Cosmic Microwave Background. The described high frequency instrument is a HEMT-based array composed of 29 pixels. A thorough analysis of the behaviour of the proposed receiver, based on electronic phase switching, is presented for a noise-like linearly polarized input signal, obtaining simultaneously I, Q and U Stokes parameters of the input signal. Wideband subsystems are designed, assembled and characterized for the polarimeter. Their performances are described showing appropriate results within the 35-to-47 GHz frequency band. Functionality tests are performed at room and cryogenic temperatures with adequate results for both temperature conditions, which validate the receiver concept and performance.
Measured Radiation Patterns of the Boeing 91-Element ICAPA Antenna With Comparison to Calculations
NASA Technical Reports Server (NTRS)
Lambert, Kevin M.; Burke, Thomas (Technical Monitor)
2003-01-01
This report presents measured antenna patterns of the Boeing 91-Element Integrated Circuit Active Phased Array (ICAPA) Antenna at 19.85 GHz. These patterns were taken in support of various communication experiments that were performed using the antenna as a testbed. The goal here is to establish a foundation of the performance of the antenna for the experiments. An independent variable used in the communication experiments was the scan angle of the antenna. Therefore, the results presented here are patterns as a function of scan angle, at the stated frequency. Only a limited number of scan angles could be measured. Therefore, a computer program was written to simulate the pattern performance of the antenna at any scan angle. This program can be used to facilitate further study of the antenna. The computed patterns from this program are compared to the measured patterns as a means of validating the model.
Inter-hemispheric interaction facilitates face processing.
Compton, Rebecca J
2002-01-01
Many recent studies have revealed that interaction between the left and right cerebral hemispheres can aid in task performance, but these studies have tended to examine perception of simple stimuli such as letters, digits or simple shapes, which may have limited naturalistic validity. The present study extends these prior findings to a more naturalistic face perception task. Matching tasks required subjects to indicate when a target face matched one of two probe faces. Matches could be either across-field, requiring inter-hemispheric interaction, or within-field, not requiring inter-hemispheric interaction. Subjects indicated when faces matched in emotional expression (Experiment 1; n=32) or in character identity (Experiment 2; n=32). In both experiments, across-field performance was significantly better than within-field performance, supporting the primary hypothesis. Further, this advantage was greater for the more difficult character identity task. Results offer qualified support for the hypothesis that inter-hemispheric interaction is especially advantageous as task demands increase.
Dynamics of Exploding Plasma Within a Magnetized Plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimonte, G; Dipeso, G; Hewett, D
2002-02-01
This memo describes several possible laboratory experiments on the dynamics of an exploding plasma in a background magnetized plasma. These are interesting scientifically and the results are applicable to energetic explosions in the earth's ionosphere (DOE Campaign 7 at LLNL). These proposed experiments are difficult and can only be performed in the new LAPD device at UCLA. The purpose of these experiments would be to test numerical simulations, theory and reduced models for systems performance codes. The experiments are designed to investigate the affect of the background plasma on (1) the maximum diamagnetic bubble radius given by Eq. 9; andmore » (2) the Alfven wave radiation efficiency produced by the induced current J{sub A} (Eqs. 10-12) These experiments involve measuring the bubble radius using a fast gated optical imager as in Ref [1] and the Alfven wave profile and intensity as in Ref [2] for different values of the exploding plasma energy, background plasma density and temperature, and background magnetic field. These experiments extend the previously successful experiments [2] on Alfven wave coupling. We anticipate that the proposed experiments would require 1-2 weeks of time on the LAPD. We would perform PIC simulations in support of these experiments in order to validate the codes. Once validated, the PIC simulations would then be able to be extended to realistic ionospheric conditions with various size explosions and altitudes. In addition to the Alfven wave coupling, we are interested in the magnetic containment and transport of the exploding ''debris'' plasma to see if the shorting of the radial electric field in the magnetic bubble would allow the ions to propagate further. This has important implications in an ionospheric explosion because it defines the satellite damage region. In these experiments, we would field fast gated optical cameras to obtain images of the plasma expansion, which could then be correlated with magnetic probe measurements. In this regard, it would be most helpful to have a more powerful laser more than 10J in order to increase the extent of the magnetic bubble.« less
DeBourgh, Gregory A; Prion, Susan K
2017-03-22
Background Essential nursing skills for safe practice are not limited to technical skills, but include abilities for determining salience among clinical data within dynamic practice environments, demonstrating clinical judgment and reasoning, problem-solving abilities, and teamwork competence. Effective instructional methods are needed to prepare new nurses for entry-to-practice in contemporary healthcare settings. Method This mixed-methods descriptive study explored self-reported perceptions of a process to self-record videos for psychomotor skill performance evaluation in a convenience sample of 102 pre-licensure students. Results Students reported gains in confidence and skill acquisition using team skills to record individual videos of skill performance, and described the importance of teamwork, peer support, and deliberate practice. Conclusion Although time consuming, the production of student-directed video validations of psychomotor skill performance is an authentic task with meaningful accountabilities that is well-received by students as an effective, satisfying learner experience to increase confidence and competence in performing psychomotor skills.
The Second SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-2)
NASA Technical Reports Server (NTRS)
2005-01-01
Eight international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a variety of laboratory standards. The field samples were collected primarily from eutrophic waters, although mesotrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.3 25.8 mg m-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) an evaluation of spectrophotometric versus HPLC uncertainties in the determination of total chlorophyll a; and c) the reduction in uncertainties as a result of applying quality assurance (QA) procedures associated with extraction, separation, injection, degradation, detection, calibration, and reporting (particularly limits of detection and quantitation). In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied. The culmination of the activity was a validation of the round-robin methodology plus the development of the requirements for validating an individual HPLC method. The validation process includes the measurements required to initially demonstrate a pigment is validated, and the measurements that must be made during sample analysis to confirm a method remains validated. The so-called performance-based metrics developed here describe a set of thresholds for a variety of easily-measured parameters with a corresponding set of performance categories. The aggregate set of performance parameters and categories establish a) the overall performance capability of the method, and b) whether or not the capability is consistent with the required accuracy objectives.
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...
2017-10-01
Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a modelmore » does not allow to validate topology and configuration information easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.« less
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey
Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a modelmore » does not allow to validate topology and configuration information easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.« less
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
NASA Astrophysics Data System (ADS)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow to validate topology and configuration information easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
Benej, Martin; Bendlova, Bela; Vaclavikova, Eliska; Poturnajova, Martina
2011-10-06
Reliable and effective primary screening of mutation carriers is the key condition for common diagnostic use. The objective of this study is to validate the method high resolution melting (HRM) analysis for routine primary mutation screening and accomplish its optimization, evaluation and validation. Due to their heterozygous nature, germline point mutations of c-RET proto-oncogene, associated to multiple endocrine neoplasia type 2 (MEN2), are suitable for HRM analysis. Early identification of mutation carriers has a major impact on patients' survival due to early onset of medullary thyroid carcinoma (MTC) and resistance to conventional therapy. The authors performed a series of validation assays according to International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines for validation of analytical procedures, along with appropriate design and optimization experiments. After validated evaluation of HRM, the method was utilized for primary screening of 28 pathogenic c-RET mutations distributed among nine exons of c-RET gene. Validation experiments confirm the repeatability, robustness, accuracy and reproducibility of HRM. All c-RET gene pathogenic variants were detected with no occurrence of false-positive/false-negative results. The data provide basic information about design, establishment and validation of HRM for primary screening of genetic variants in order to distinguish heterozygous point mutation carriers among the wild-type sequence carriers. HRM analysis is a powerful and reliable tool for rapid and cost-effective primary screening, e.g., of c-RET gene germline and/or sporadic mutations and can be used as a first line potential diagnostic tool.
Integration and Validation of Hysteroscopy Simulation in the Surgical Training Curriculum.
Elessawy, Mohamed; Skrzipczyk, Moritz; Eckmann-Scholz, Christel; Maass, Nicolai; Mettler, Liselotte; Guenther, Veronika; van Mackelenbergh, Marion; Bauerschlag, Dirk O; Alkatout, Ibrahim
The primary objective of our study was to test the construct validity of the HystSim hysteroscopic simulator to determine whether simulation training can improve the acquisition of hysteroscopic skills regardless of the previous levels of experience of the participants. The secondary objective was to analyze the performance of a selected task, using specially designed scoring charts to help reduce the learning curve for both novices and experienced surgeons. The teaching of hysteroscopic intervention has received only scant attention, focusing mainly on the development of physical models and box simulators. This encouraged our working group to search for a suitable hysteroscopic simulator module and to test its validation. We decided to use the HystSim hysteroscopic simulator, which is one of the few such simulators that has already completed a validation process, with high ratings for both realism and training capacity. As a testing tool for our study, we selected the myoma resection task. We analyzed the results using the multimetric score system suggested by HystSim, allowing a more precise interpretation of the results. Between June 2014 and May 2015, our group collected data on 57 participants of minimally invasive surgical training courses at the Kiel School of Gynecological Endoscopy, Department of Gynecology and Obstetrics, University Hospitals Schleswig-Holstein, Campus Kiel. The novice group consisted of 42 medical students and residents with no prior experience in hysteroscopy, whereas the expert group consisted of 15 participants with more than 2 years of experience of advanced hysteroscopy operations. The overall results demonstrated that all participants attained significant improvements between their pretest and posttests, independent of their previous levels of experience (p < 0.002). Those in the expert group demonstrated statistically significant, superior scores in the pretest and posttests (p = 0.001, p = 0.006). Regarding visualization and ergonomics, the novices showed a better pretest value than the experts; however, the experts were able to improve significantly during the posttest. These precise findings demonstrated that the multimetric scoring system achieved several important objectives, including clinical relevance, critical relevance, and training motivation. All participants demonstrated improvements in their hysteroscopic skills, proving an adequate construct validation of the HystSim. Using the multimetric scoring system enabled a more accurate analysis of the performance of the participants independent of their levels of experience which could be an important key for streamlining the learning curve. Future studies testing the predictive validation of the simulator and frequency of the training intervals are necessary before the introduction of the simulator into the standard surgical training curriculum. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aly, A.; Avramova, Maria; Ivanov, Kostadin
To correctly describe and predict this hydrogen distribution there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis is validated utilizing calculations of hydrogen distribution using models informed bymore » data from hydrogen experiments and PIE data.« less
Detection of tunnel excavation using fiber optic reflectometry: experimental validation
NASA Astrophysics Data System (ADS)
Linker, Raphael; Klar, Assaf
2013-06-01
Cross-border smuggling tunnels enable unmonitored movement of people and goods, and pose a severe threat to homeland security. In recent years, we have been working on the development of a system based on fiber- optic Brillouin time domain reflectometry (BOTDR) for detecting tunnel excavation. In two previous SPIE publications we have reported the initial development of the system as well as its validation using small-scale experiments. This paper reports, for the first time, results of full-scale experiments and discusses the system performance. The results confirm that distributed measurement of strain profiles in fiber cables buried at shallow depth enable detection of tunnel excavation, and by proper data processing, these measurements enable precise localization of the tunnel, as well as reasonable estimation of its depth.
NASA Astrophysics Data System (ADS)
Dufaux, Frederic
2011-06-01
The issue of privacy in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.
Development and validation of a Malawian version of the primary care assessment tool.
Dullie, Luckson; Meland, Eivind; Hetlevik, Øystein; Mildestvedt, Thomas; Gjesdal, Sturla
2018-05-16
Malawi does not have validated tools for assessing primary care performance from patients' experience. The aim of this study was to develop a Malawian version of Primary Care Assessment Tool (PCAT-Mw) and to evaluate its reliability and validity in the assessment of the core primary care dimensions from adult patients' perspective in Malawi. A team of experts assessed the South African version of the primary care assessment tool (ZA-PCAT) for face and content validity. The adapted questionnaire underwent forward and backward translation and a pilot study. The tool was then used in an interviewer administered cross-sectional survey in Neno district, Malawi, to test validity and reliability. Exploratory factor analysis was performed on a random half of the sample to evaluate internal consistency, reliability and construct validity of items and scales. The identified constructs were then tested with confirmatory factor analysis. Likert scale assumption testing and descriptive statistics were done on the final factor structure. The PCAT-Mw was further tested for intra-rater and inter-rater reliability. From the responses of 631 patients, a 29-item PCAT-Mw was constructed comprising seven multi-item scales, representing five primary care dimensions (first contact, continuity, comprehensiveness, coordination and community orientation). All the seven scales achieved good internal consistency, item-total correlations and construct validity. Cronbach's alpha coefficient ranged from 0.66 to 0.91. A satisfactory goodness of fit model was achieved (GFI = 0.90, CFI = 0.91, RMSEA = 0.05, PCLOSE = 0.65). The full range of possible scores was observed for all scales. Scaling assumptions tests were achieved for all except the two comprehensiveness scales. Intra-class correlation coefficient (ICC) was 0.90 (n = 44, 95% CI 0.81-0.94, p < 0.001) for intra-rater reliability and 0.84 (n = 42, 95% CI 0.71-0.96, p < 0.001) for inter-rater reliability. Comprehensive metric analyses supported the reliability and validity of PCAT-Mw in assessing the core concepts of primary care from adult patients' experience. This tool could be used for health service research in primary care in Malawi.
USDA-ARS?s Scientific Manuscript database
The NASA SMAP (Soil Moisture Active Passive) mission conducted the SMAP Validation Experiment 2015 (SMAPVEX15) in order to support the calibration and validation activities of SMAP soil moisture data product.The main goals of the experiment were to address issues regarding the spatial disaggregation...
Exercise Performance Measurement with Smartphone Embedded Sensor for Well-Being Management
Liu, Chung-Tse; Chan, Chia-Tai
2016-01-01
Regular physical activity reduces the risk of many diseases and improves physical and mental health. However, physical inactivity is widespread globally. Improving physical activity levels is a global concern in well-being management. Exercise performance measurement systems have the potential to improve physical activity by providing feedback and motivation to users. We propose an exercise performance measurement system for well-being management that is based on the accumulated activity effective index (AAEI) and incorporates a smartphone-embedded sensor. The proposed system generates a numeric index that is based on users’ exercise performance: their level of physical activity and number of days spent exercising. The AAEI presents a clear number that can serve as a useful feedback and goal-setting tool. We implemented the exercise performance measurement system by using a smartphone and conducted experiments to assess the feasibility of the system and investigated the user experience. We recruited 17 participants for validating the feasibility of the measurement system and a total of 35 participants for investigating the user experience. The exercise performance measurement system showed an overall precision of 88% in activity level estimation. Users provided positive feedback about their experience with the exercise performance measurement system. The proposed system is feasible and has a positive effective on well-being management. PMID:27727188
Exercise Performance Measurement with Smartphone Embedded Sensor for Well-Being Management.
Liu, Chung-Tse; Chan, Chia-Tai
2016-10-11
Regular physical activity reduces the risk of many diseases and improves physical and mental health. However, physical inactivity is widespread globally. Improving physical activity levels is a global concern in well-being management. Exercise performance measurement systems have the potential to improve physical activity by providing feedback and motivation to users. We propose an exercise performance measurement system for well-being management that is based on the accumulated activity effective index (AAEI) and incorporates a smartphone-embedded sensor. The proposed system generates a numeric index that is based on users' exercise performance: their level of physical activity and number of days spent exercising. The AAEI presents a clear number that can serve as a useful feedback and goal-setting tool. We implemented the exercise performance measurement system by using a smartphone and conducted experiments to assess the feasibility of the system and investigated the user experience. We recruited 17 participants for validating the feasibility of the measurement system and a total of 35 participants for investigating the user experience. The exercise performance measurement system showed an overall precision of 88% in activity level estimation. Users provided positive feedback about their experience with the exercise performance measurement system. The proposed system is feasible and has a positive effective on well-being management.
Attribution of movement: Potential links between subjective reports of agency and output monitoring.
Sugimori, Eriko; Asai, Tomohisa
2015-01-01
According to agency memory theory, individuals decide whether "I did it" based on a memory trace of "I am doing it". The purpose of this study was to validate the agency memory theory. To this end, several hand actions were individually presented as samples, and participants were asked to perform the sample action, observe the performance of that action by another person, or imagine performing the action. Online feedback received by the participants during the action was manipulated among the different conditions, and output monitoring, in which participants were asked whether they had performed each hand action, was conducted. The rate at which respondents thought that they themselves had performed the action was higher when visual feedback was unaltered than when it was altered (Experiment 1A), and this tendency was observed across all types of altered feedback (Experiment 1B). The observation of an action performed by the hand of another person did not increase the rate at which respondents thought that they themselves had performed the action unless the participants actually did perform the action (Experiments 2A and 2B). In Experiment 3, a relationship was observed between the subjective feeling that "I am the one who is causing an action" and the memory that "I did perform the action". These experiments support the hypothesis that qualitative information and sense of "self" are tagged in a memory trace and that such tags can be used as cues for judgements when the memory is related to the "self".
Slip resistance of winter footwear on snow and ice measured using maximum achievable incline.
Hsu, Jennifer; Shaw, Robert; Novak, Alison; Li, Yue; Ormerod, Marcus; Newton, Rita; Dutta, Tilak; Fernie, Geoff
2016-05-01
Protective footwear is necessary for preventing injurious slips and falls in winter conditions. Valid methods for assessing footwear slip resistance on winter surfaces are needed in order to evaluate footwear and outsole designs. The purpose of this study was to utilise a method of testing winter footwear that was ecologically valid in terms of involving actual human testers walking on realistic winter surfaces to produce objective measures of slip resistance. During the experiment, eight participants tested six styles of footwear on wet ice, on dry ice, and on dry ice after walking over soft snow. Slip resistance was measured by determining the maximum incline angles participants were able to walk up and down in each footwear-surface combination. The results indicated that testing on a variety of surfaces is necessary for establishing winter footwear performance and that standard mechanical bench tests for footwear slip resistance do not adequately reflect actual performance. Practitioner Summary: Existing standardised methods for measuring footwear slip resistance lack validation on winter surfaces. By determining the maximum inclines participants could walk up and down slopes of wet ice, dry ice, and ice with snow, in a range of footwear, an ecologically valid test for measuring winter footwear performance was established.
Role of natural analogs in performance assessment of nuclear waste repositories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagar, B.; Wittmeyer, G.W.
1995-09-01
Mathematical models of the flow of water and transport of radionuclides in porous media will be used to assess the ability of deep geologic repositories to safely contain nuclear waste. These models must, in some sense, be validated to ensure that they adequately describe the physical processes occurring within the repository and its geologic setting. Inasmuch as the spatial and temporal scales over which these models must be applied in performance assessment are very large, validation of these models against laboratory and small-scale field experiments may be considered inadequate. Natural analogs may provide validation data that are representative of physico-chemicalmore » processes that occur over spatial and temporal scales as large or larger than those relevant to repository design. The authors discuss the manner in which natural analog data may be used to increase confidence in performance assessment models and conclude that, while these data may be suitable for testing the basic laws governing flow and transport, there is insufficient control of boundary and initial conditions and forcing functions to permit quantitative validation of complex, spatially distributed flow and transport models. The authors also express their opinion that, for collecting adequate data from natural analogs, resources will have to be devoted to them that are much larger than are devoted to them at present.« less
Slip resistance of winter footwear on snow and ice measured using maximum achievable incline
Hsu, Jennifer; Shaw, Robert; Novak, Alison; Li, Yue; Ormerod, Marcus; Newton, Rita; Dutta, Tilak; Fernie, Geoff
2016-01-01
Abstract Protective footwear is necessary for preventing injurious slips and falls in winter conditions. Valid methods for assessing footwear slip resistance on winter surfaces are needed in order to evaluate footwear and outsole designs. The purpose of this study was to utilise a method of testing winter footwear that was ecologically valid in terms of involving actual human testers walking on realistic winter surfaces to produce objective measures of slip resistance. During the experiment, eight participants tested six styles of footwear on wet ice, on dry ice, and on dry ice after walking over soft snow. Slip resistance was measured by determining the maximum incline angles participants were able to walk up and down in each footwear–surface combination. The results indicated that testing on a variety of surfaces is necessary for establishing winter footwear performance and that standard mechanical bench tests for footwear slip resistance do not adequately reflect actual performance. Practitioner Summary: Existing standardised methods for measuring footwear slip resistance lack validation on winter surfaces. By determining the maximum inclines participants could walk up and down slopes of wet ice, dry ice, and ice with snow, in a range of footwear, an ecologically valid test for measuring winter footwear performance was established. PMID:26555738
Kang, Sung Gu; Cho, Seok; Kang, Seok Ho; Haidar, Abdul Muhsin; Samavedi, Srinivas; Palmer, Kenneth J; Patel, Vipul R; Cheon, Jun
2014-08-01
To better use virtual reality robotic simulators and offer surgeons more practical exercises, we developed the Tube 3 module for practicing vesicourethral anastomosis (VUA), one of the most complex steps in the robot-assisted radical prostatectomy procedure. Herein, we describe the principle of the Tube 3 module and evaluate its face, content, and construct validity. Residents and attending surgeons participated in a prospective study approved by the institutional review board. We divided subjects into 2 groups, those with experience and novices. Each subject performed a simulated VUA using the Tube 3 module. A built-in scoring algorithm recorded the data from each performance. After completing the Tube 3 module exercise, each subject answered a questionnaire to provide data to be used for face and content validation. The novice group consisted of 10 residents. The experienced subjects (n = 10) had each previously performed at least 10 robotic surgeries. The experienced group outperformed the novice group in most variables, including task time, total score, total economy of motion, and number of instrument collisions (P <.05). Additionally, 80% of the experienced surgeons agreed that the module reflects the technical skills required to perform VUA and would be a useful training tool. We describe the Tube 3 module for practicing VUA, which showed excellent face, content, and construct validity. The task needs to be refined in the future to reflect VUA under real operating conditions, and concurrent and predictive validity studies are currently underway. Copyright © 2014 Elsevier Inc. All rights reserved.
Components of executive functioning in metamemory.
Mäntylä, Timo; Rönnlund, Michael; Kliegel, Matthias
2010-10-01
This study examined metamemory in relation to three basic executive functions (set shifting, working memory updating, and response inhibition) measured as latent variables. Young adults (Experiment 1) and middle-aged adults (Experiment 2) completed a set of executive functioning tasks and the Prospective and Retrospective Memory Questionnaire (PRMQ). In Experiment 1, source recall and face recognition tasks were included as indicators of objective memory performance. In both experiments, analyses of the executive functioning data yielded a two-factor solution, with the updating and inhibition tasks constituting a common factor and the shifting tasks a separate factor. Self-reported memory problems showed low predictive validity, but subjective and objective memory performance were related to different components of executive functioning. In both experiments, set shifting, but not updating and inhibition, was related to PRMQ, whereas source recall showed the opposite pattern of correlations in Experiment 1. These findings suggest that metamemorial judgments reflect selective effects of executive functioning and that individual differences in mental flexibility contribute to self-beliefs of efficacy.
NASA Astrophysics Data System (ADS)
Fix, A.; Ehret, G.; Flentje, H.; Poberaj, G.; Gottwald, M.; Finkenzeller, H.; Bremer, H.; Bruns, M.; Burrows, J. P.; Kleinböhl, A.; Küllmann, H.; Kuttippurath, J.; Richter, A.; Wang, P.; Heue, K.-P.; Platt, U.; Wagner, T.
2004-12-01
For the first time three different remote sensing instruments - a sub-millimeter radiometer, a differential optical absorption spectrometer in the UV-visible spectral range, and a lidar - were deployed aboard DLR's meteorological research aircraft Falcon 20 to validate a large number of SCIAMACHY level 2 and off-line data products such as O3, NO2, N2O, BrO, OClO, H2O, aerosols, and clouds. Within two main validation campaigns of the SCIA-VALUE mission (SCIAMACHY VALidation and Utilization Experiment) extended latitudinal cross-sections stretching from polar regions to the tropics as well as longitudinal cross sections at polar latitudes at about 70° N and the equator have been generated. This contribution gives an overview over the campaigns performed and reports on the observation strategy for achieving the validation goals. We also emphasize the synergetic use of the novel set of aircraft instrumentation and the usefulness of this innovative suite of remote sensing instruments for satellite validation.
NASA Astrophysics Data System (ADS)
Fix, A.; Ehret, G.; Flentje, H.; Poberaj, G.; Gottwald, M.; Finkenzeller, H.; Bremer, H.; Bruns, M.; Burrows, J. P.; Kleinböhl, A.; Küllmann, H.; Kuttippurath, J.; Richter, A.; Wang, P.; Heue, K.-P.; Platt, U.; Pundt, I.; Wagner, T.
2005-05-01
For the first time three different remote sensing instruments - a sub-millimeter radiometer, a differential optical absorption spectrometer in the UV-visible spectral range, and a lidar - were deployed aboard DLR's meteorological research aircraft Falcon 20 to validate a large number of SCIAMACHY level 2 and off-line data products such as O3, NO2, N2O, BrO, OClO, H2O, aerosols, and clouds. Within two validation campaigns of the SCIA-VALUE mission (SCIAMACHY VALidation and Utilization Experiment) extended latitudinal cross-sections stretching from polar regions to the tropics as well as longitudinal cross sections at polar latitudes at about 70° N and the equator were generated. This contribution gives an overview over the campaigns performed and reports on the observation strategy for achieving the validation goals. We also emphasize the synergetic use of the novel set of aircraft instrumentation and the usefulness of this innovative suite of remote sensing instruments for satellite validation.
Further development of the talent development environment questionnaire for sport.
Li, Chunxiao; Wang, Chee Keng John; Pyun, Do Young; Martindale, Russell
2015-01-01
Given the significance of monitoring the critical environmental factors that facilitate athlete performance, this two-phase research aimed to validate and refine the revised talent development environment questionnaire (TDEQ). The TDEQ is a multidimensional self-report scale that assesses talented athletes' environmental experiences. Study 1 (the first phase) involved the examination of the revised TDEQ through an exploratory factor analysis (n = 363). This exploratory investigation identified a 28-item five-factor structure (i.e., TDEQ-5) with adequate internal consistency. Study 2 (the second phase) examined the factorial structure of the TDEQ-5, including convergent validity, discriminant validity, and group invariance (i.e., gender and sports type). The second phase was carried out with 496 talented athletes through the application of confirmatory factor analyses and multigroup invariance tests. The results supported the convergent validity, discriminant validity, and group invariance of the TDEQ-5. In conclusion, the TDEQ-5 with 25 items appears to be a reliable and valid scale for use in talent development environments.
Determination of lipophilic toxins by LC/MS/MS: single-laboratory validation.
Villar-González, Adriano; Rodríguez-Velasco, María Luisa; Gago-Martínez, Ana
2011-01-01
An LC/MS/MS method has been developed, assessed, and intralaboratory-validated for the analysis of the lipophilic toxins currently regulated by European Union legislation: okadaic acid (OA) and dinophysistoxins 1 and 2, including their ester forms; azaspiracids 1, 2, and 3; pectenotoxins 1 and 2; yessotoxin (YTX), and the analogs 45 OH-YTX, Homo YTX, and 45 OH-Homo YTX; as well as for the analysis of 13-desmetil-spirolide C. The method consists of duplicate sample extraction with methanol and direct analysis of the crude extract without further cleanup or concentration. Ester forms of OA and dinophysistoxins are detected as the parent ions after alkaline hydrolysis of the extract. The validation process of this method was performed using both fortified and naturally contaminated samples, and experiments were designed according to International Organization for Standardization, International Union of Pure and Applied Chemistry, and AOAC guidelines. With the exception of YTX in fortified samples, RSDr below 15% and RSDR were below 25%. Recovery values were between 77 and 95%, and LOQs were below 60 microg/kg. These data together with validation experiments for recovery, selectivity, robustness, traceability, and linearity, as well as uncertainty calculations, are presented in this paper.
Toward Supersonic Retropropulsion CFD Validation
NASA Technical Reports Server (NTRS)
Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl
2011-01-01
This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that this complex flowfield, involving time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Riha, David S.
2013-01-01
Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS(Registered Trademark) for the probabilistic analysis, and NASGRO(Registered Trademark) for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential of reusing existing and installed units prior to the new design certification.
Electrolysis Performance Improvement Concept Study (EPICS) flight experiment phase C/D
NASA Technical Reports Server (NTRS)
Schubert, F. H.; Lee, M. G.
1995-01-01
The overall purpose of the Electrolysis Performance Improvement Concept Study flight experiment is to demonstrate and validate in a microgravity environment the Static Feed Electrolyzer concept as well as investigate the effect of microgravity on water electrolysis performance. The scope of the experiment includes variations in microstructural characteristics of electrodes and current densities in a static feed electrolysis cell configuration. The results of the flight experiment will be used to improve efficiency of the static feed electrolysis process and other electrochemical regenerative life support processes by reducing power and expanding the operational range. Specific technologies that will benefit include water electrolysis for propulsion, energy storage, life support, extravehicular activity, in-space manufacturing and in-space science in addition to other electrochemical regenerative life support technologies such as electrochemical carbon dioxide and oxygen separation, electrochemical oxygen compression and water vapor electrolysis. The Electrolysis Performance Improvement Concept Study flight experiment design incorporates two primary hardware assemblies: the Mechanical/Electrochemical Assembly and the Control/Monitor Instrumentation. The Mechanical/Electrochemical Assembly contains three separate integrated electrolysis cells along with supporting pressure and temperature control components. The Control/Monitor Instrumentation controls the operation of the experiment via the Mechanical/Electrochemical Assembly components and provides for monitoring and control of critical parameters and storage of experimental data.
Development of a new instrument for determining the level of chewing function in children.
Serel Arslan, S; Demir, N; Barak Dolgun, A; Karaduman, A A
2016-07-01
This study aimed to develop a chewing performance scale that classifies chewing from normal to severely impaired and to investigate its validity and reliability. The study included the developmental phase and reported the content, structural, criterion validity, interobserver and intra-observer reliability of the chewing performance scale, which was called the Karaduman Chewing Performance Scale (KCPS). A dysphagia literature review, other questionnaires and clinical experiences were used in the developmental phase. Seven experts assessed the steps for content validity over two Delphi rounds. To test structural, criterion validity, interobserver and intra-observer reliability, two swallowing therapists evaluated chewing videos of 144 children (Group I: 61 healthy children without chewing disorders, mean age of 42·38 ± 9·36 months; Group II: 83 children with cerebral palsy who have chewing disorders, mean age of 39·09 ± 22·95 months) using KCPS. The Behavioral Pediatrics Feeding Assessment Scale (BPFAS) was used for criterion validity. The KCPS steps arranged between 0-4 were found to be necessary. The content validity index was 0·885. The KCPS levels were found to be different between groups I and II (χ(2) = 123·286, P < 0·001). A moderately strong positive correlation was found between the KCPS and the subscales of the BPFAS (r = 0·444-0·773, P < 0·001). An excellent positive correlation was detected between two swallowing therapists and between two examinations of one swallowing therapist (r = 0·962, P < 0·001; r = 0·990, P < 0·001, respectively). The KCPS is a valid, reliable, quick and clinically easy-to-use functional instrument for determining the level of chewing function in children. © 2016 John Wiley & Sons Ltd.
Assimilation of satellite altimeter data in a primitive-equation model of the Azores Madeira region
NASA Astrophysics Data System (ADS)
Gavart, Michel; De Mey, Pierre; Caniaux, Guy
1999-07-01
The aim of this study is to implement satellite altimetric assimilation into a high-resolution primitive-equation ocean model and check the validity and sensitivity of the results. Beyond this paper, the remote objective is to get a dynamical tool capable of simulating the surface ocean processes linked to the air-sea interactions as well as to perform mesoscale ocean forecasting. For computational cost and practical reasons, this study takes place in a 1000 by 1000 sq km open domain of the Canary basin. The assimilation experiments are carried out with the combined TOPEX/POSEIDON and ERS-1 data sets between June 1993 and December 1993. The space-time domain overlaps with in situ data collected during the SEMAPHORE experiment and thus enables an objective validation of the results. A special boundary treatment is applied to the model by creating a surrounding recirculating area separated from the interior by a buffer zone. The altimetric assimilation is done by implementing a reduced-order optimal interpolation algorithm with a special vertical projection of the surface model/data misfits. We perform a first experiment with a vertical projection onto an isopycnal EOF representing the Azores Current vertical variability. An objective validation of the model's velocities with Lagrangian float data shows good results (the correlation is 0.715 at 150 dbar). The question of the sensitivity to the vertical projection is addressed by performing similar experiments using a method for lifting/lowering of the water column, and using an EOF in Z-coordinates. Some comparisons with in situ temperature data do not show any significant difference between the three projections, after five months of assimilation. However, in order to preserve the large-scale water characteristics, we felt that the isopycnal projection was a more physically consistent choice. Then, the complementary character of the two satellites is assessed with two additional experiments which use each altimeter data sets separately. There is an evidence of the benefit of combining the two data sets. Otherwise, an experiment assimilating long-wavelength bias-corrected CLS altimetric maps every 10 days exhibits the best correlation scores and emphasizes the importance of reducing the orbit error and biases in the altimetric data sets. The surface layers of the model are forced using realistic daily wind stress values computed from ECMWF analyses. Although we resolve small space and time scales, in our limited domain the wind stress does not significantly influence the quality of the results obtained with the altimetric assimilation. Finally, the relative effects of the data selection procedure and of the integration times (cycle lengths) is explored by performing data window experiments. A value of 10 days seems to be the most satisfactory cycle length.
Designing biomedical proteomics experiments: state-of-the-art and future perspectives.
Maes, Evelyne; Kelchtermans, Pieter; Bittremieux, Wout; De Grave, Kurt; Degroeve, Sven; Hooyberghs, Jef; Mertens, Inge; Baggerman, Geert; Ramon, Jan; Laukens, Kris; Martens, Lennart; Valkenborg, Dirk
2016-05-01
With the current expanded technical capabilities to perform mass spectrometry-based biomedical proteomics experiments, an improved focus on the design of experiments is crucial. As it is clear that ignoring the importance of a good design leads to an unprecedented rate of false discoveries that would poison our results, more and more tools are being developed to help researchers design proteomics experiments. In this review, we apply statistical thinking to go through the entire proteomics workflow for biomarker discovery and validation and relate the considerations that should be made at the level of hypothesis building, technology selection, experimental design and the optimization of the experimental parameters.
Thermal performance evaluation of the infrared telescope dewar subsystem
NASA Technical Reports Server (NTRS)
Urban, E. W.
1986-01-01
Thermal performance evaluations (TPE) were conducted with the superfluid helium dewar of the Infrared Telescope (IRT) experiment from November 1981 to August 1982. Tests included measuring key operating parameters, simulating operations with an attached instrument cryostat, and validating servicing, operating and safety procedures. Test activities and results are summarized. All objectives were satisfied except for those involving transfer of low pressure liquid helium (LHe) from a supply dewar into the dewar subsystem.
Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?
Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J
2014-02-01
Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to specifically determine which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between each group for 21 metrics (procedural, 14; intraoperative errors, 7) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 considered significant). Fifty-three surgeons (29 general surgeons and 24 colorectal surgeons) enrolled in the study. The virtual reality simulators for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulators for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
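As a concrete illustration of the group comparison described above, the following Python sketch applies the Mann-Whitney U-test to a single hypothetical simulator metric; all values are invented for illustration, not study data:

```python
# Comparing one simulator metric (e.g., instrument path length, in cm)
# between a novice group and an expert group, with p < 0.05 taken as
# evidence that the metric discriminates between experience levels.
from scipy.stats import mannwhitneyu

general = [412, 388, 455, 430, 401, 467, 395, 442]   # made-up novice values
experts = [305, 298, 340, 312, 289, 330, 301, 318]   # made-up expert values

stat, p = mannwhitneyu(general, experts, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("Metric distinguishes experience levels (construct validity).")
```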
Advances in shock timing experiments on the National Ignition Facility
NASA Astrophysics Data System (ADS)
Robey, H. F.; Celliers, P. M.; Moody, J. D.; Sater, J.; Parham, T.; Kozioziemski, B.; Dylla-Spears, R.; Ross, J. S.; LePape, S.; Ralph, J. E.; Hohenberger, M.; Dewald, E. L.; Berzak Hopkins, L.; Kroll, J. J.; Yoxall, B. E.; Hamza, A. V.; Boehly, T. R.; Nikroo, A.; Landen, O. L.; Edwards, M. J.
2016-03-01
Recent advances in shock timing experiments and analysis techniques now enable shock measurements to be performed in cryogenic deuterium-tritium (DT) ice layered capsule implosions on the National Ignition Facility (NIF). Previous measurements of shock timing in inertial confinement fusion (ICF) implosions were performed in surrogate targets, where the solid DT ice shell and central DT gas were replaced with a continuous liquid deuterium (D2) fill. These previous experiments pose two surrogacy issues: a material surrogacy due to the difference in species (D2 vs. DT) and densities of the materials used, and a geometric surrogacy due to the presence of an additional interface (ice/gas) absent in the liquid-filled targets. This report presents experimental data and a new analysis method for validating the assumptions underlying this surrogate technique.
Testing and validating environmental models
Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.
1996-01-01
Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
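The paper's demonstration is not reproduced here, but its claim that overlay and predicted-versus-observed comparisons carry little diagnostic power can be illustrated with a synthetic sketch of our own (assumed seasonal signal and noise levels): two structurally different "models" both score high agreement against the same series.

```python
# Two different models of a noisy seasonal series: one encodes the true
# mechanism, the other is just a smoother fitted to the data themselves.
# Both achieve high R^2, so the overlay plot alone cannot tell them apart.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365)
data = 10 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.5, t.size)

model_a = 10 + 8 * np.sin(2 * np.pi * t / 365)              # "right" mechanism
model_b = np.convolve(data, np.ones(30) / 30, mode="same")  # ad hoc smoother

def r_squared(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(f"R^2, mechanistic model: {r_squared(data, model_a):.3f}")
print(f"R^2, ad hoc smoother:   {r_squared(data, model_b):.3f}")
```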
CFD validation experiments for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and are related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given, and gaps are identified where future experiments could provide new validation data.
SATS HVO Concept Validation Experiment
NASA Technical Reports Server (NTRS)
Consiglio, Maria; Williams, Daniel; Murdoch, Jennifer; Adams, Catherine
2005-01-01
A human-in-the-loop simulation experiment was conducted at the NASA Langley Research Center's (LaRC) Air Traffic Operations Lab (ATOL) in an effort to comprehensively validate tools and procedures intended to enable the Small Aircraft Transportation System, Higher Volume Operations (SATS HVO) concept of operations. The SATS HVO procedures were developed to increase the rate of operations at non-towered, non-radar airports in near all-weather conditions. A key element of the design is the establishment of a volume of airspace around designated airports where pilots accept responsibility for self-separation. Flights operating at these airports are given approach sequencing information computed by a ground-based automated system. The SATS HVO validation experiment was conducted in the ATOL during the spring of 2004 in order to determine if a pilot can safely and proficiently fly an airplane while performing SATS HVO procedures. Comparative measures of flight path error, perceived workload and situation awareness were obtained for two types of scenarios. Baseline scenarios were representative of today's system utilizing procedural separation, where air traffic control grants one approach or departure clearance at a time. SATS HVO scenarios represented approach and departure procedures as described in the SATS HVO concept of operations. Results from the experiment indicate that low-time pilots were able to fly SATS HVO procedures and maintain self-separation as safely and proficiently as flying today's procedures.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
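One standard way to approach the sample-size question posed above is an a priori power analysis. The sketch below is a generic Python illustration (assumed effect size, significance level, and power target; not NVESD's actual procedure) for a two-group comparison:

```python
# Estimate subjects per group needed to detect an assumed medium effect
# (Cohen's d = 0.5) with a two-sample t-test at alpha = 0.05 and 80% power.
import math
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                alternative="two-sided")
print(f"Required subjects per group: {math.ceil(n)}")  # about 64 for d = 0.5
```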
DE-NE0008277_PROTEUS final technical report 2018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas
This project details re-evaluations of experiments on gas-cooled fast reactor (GCFR) core designs performed in the 1970s at the PROTEUS reactor and creates a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no gas-cooled fast reactor (GCFR) experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.
Enhanced Facilitation of Spatial Attention in Schizophrenia
Spencer, Kevin M.; Nestor, Paul G.; Valdman, Olga; Niznikiewicz, Margaret A.; Shenton, Martha E.; McCarley, Robert W.
2010-01-01
Objective While attentional functions are usually found to be impaired in schizophrenia, a review of the literature on the orienting of spatial attention in schizophrenia suggested that voluntary attentional orienting in response to a valid cue might be paradoxically enhanced. We tested this hypothesis with orienting tasks involving the cued detection of a laterally-presented target stimulus. Method Subjects were chronic schizophrenia patients (SZ) and matched healthy control subjects (HC). In Experiment 1 (15 SZ, 16 HC), cues were endogenous (arrows) and could be valid (100% predictive) or neutral with respect to the subsequent target position. In Experiment 2 (16 SZ, 16 HC), subjects performed a standard orienting task with unpredictive exogenous cues (brightening of the target boxes). Results In Experiment 1, SZ showed a larger attentional facilitation effect on reaction time than HC. In Experiment 2, no clear sign of enhanced attentional facilitation was found in SZ. Conclusions The voluntary, facilitatory shifting of spatial attention may be relatively enhanced in individuals with schizophrenia in comparison to healthy individuals. This effect bears resemblance to other relative enhancements of information processing in schizophrenia such as saccade speed and semantic priming. PMID:20919764
Experimental validation of prototype high voltage bushing
NASA Astrophysics Data System (ADS)
Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.
2017-08-01
The Prototype High Voltage Bushing (PHVB) is a scaled-down configuration of the DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, viz. ceramic and fiber-reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary for 50 kV isolation between the grounded and high voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and enable precise stress calculation, a quantitative analysis was performed using Scanning Electron Microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the Finite Element (FE) analysis. FE analysis of the PHVB was performed to find the electrical stresses on different areas of the PHVB, which are maintained similar to those of the DNB HV Bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. The initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV withstand voltage was applied for one hour. A withstand test at 60 kV DC (20% above the rated voltage) was also performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB is presented together with experimental validation data.
Engkvist, I-L; Eklund, J; Krook, J; Björkman, M; Sundin, E; Svensson, R; Eklund, M
2010-05-01
Recycling is a new and developing industry, which has only been researched to a limited extent. This article describes the development and use of instruments for data collection within the multidisciplinary research programme "Recycling centres in Sweden - working conditions, environmental and system performance". The overall purpose of the programme was to form a basis for improving the function of recycling centres with respect to these three perspectives and the disciplines of ergonomics, safety, external environment, and production systems. A total of 10 instruments were developed for collecting data from employees, managers and visitors at recycling centres, including one instrument for observing visitors. Validation tests were performed in several steps. This, along with the quality of the collected data and experience from the data collection, showed that the instruments and methodology used were valid and suitable for their purpose. Copyright © 2009 Elsevier Ltd. All rights reserved.
Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent
NASA Astrophysics Data System (ADS)
Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.
2014-06-01
The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within the CMS experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general-purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the actions of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.
Havemann, Maria Cecilie; Dalsgaard, Torur; Sørensen, Jette Led; Røssaak, Kristin; Brisling, Steffen; Mosgaard, Berit Jul; Høgdall, Claus; Bjerrum, Flemming
2018-05-14
Increasing focus on patient safety makes it important to ensure surgical competency among surgeons before operating on patients. The objective was to gather validity evidence for a virtual-reality simulator test for robotic surgical skills and evaluate its potential as a training tool. Surgeons with varying experience in robotic surgery were recruited: novices (zero procedures), intermediates (1-50), experienced (> 50). Five experienced surgeons rated five exercises on the da Vinci Skills Simulator. Participants were tested using the five exercises. Participants were invited back 3 times and completed a total of 10 attempts per exercise. The outcome was the average simulator performance score for the 5 exercises. 32 participants from 5 surgical specialties were included. 38 participants completed all 4 sessions. A moderate correlation between the average total score and robotic experience was identified for the first attempt (Spearman r = 0.58; p = 0.0004). A difference in average total score was observed between novices and intermediates [median score 61% (IQR 52-66) vs. 83% (IQR 75-91), adjusted p < 0.0001], as well as novices and experienced [median score 61% (IQR 52-66) vs. 80 (IQR 69-85), adjusted p = 0.002]. All three groups improved their performance between the 1st and 10th attempts (p < 0.00). This study describes validity evidence for a virtual-reality simulator for basic robotic surgical skills, which can be used for assessment of basic competency and as a training tool. However, more validity evidence is needed before it can be used for certification or high-stakes assessment.
The Development of English and Mathematics Self-Efficacy: A Latent Growth Curve Analysis
ERIC Educational Resources Information Center
Phan, Huy P.
2012-01-01
Empirical research has provided evidence supporting the validation and prediction of 4 major sources of self-efficacy: enactive performance accomplishments, vicarious experiences, verbal persuasion, and emotional states. Other research studies have also attested to the importance and potency of self-efficacy in academic learning and achievement.…
The Co-Curricular Record: Enhancing a Postsecondary Education
ERIC Educational Resources Information Center
Elias, Kimberly; Drea, Catherine
2013-01-01
This article reports on the co-curricular record program (CCR) that is created by colleges and universities in Canada to help students engage in activities which will enhance their academic performance, personal development and well-being. It examines the validation of the CCR experience in an official document, opportunity of the students to…
2015-03-20
performed a series of bid trials where they reported their willingness-to-pay for each of 30 snack food items; potential trial realization was conducted at the end of the experiment via an auction procedure
Effects of an Intelligent Web-Based English Instruction System on Students' Academic Performance
ERIC Educational Resources Information Center
Jia, J.; Chen, Y.; Ding, Z.; Bai, Y.; Yang, B.; Li, M.; Qi, J.
2013-01-01
This research conducted quasi-experiments in four middle schools to evaluate the long-term effects of an intelligent web-based English instruction system, Computer Simulation in Educational Communication (CSIEC), on students' academic attainment. The analysis of regular examination scores and vocabulary test validates the positive impact of CSIEC,…
Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...
2015-12-30
Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature Critical Assembly experiment were performed as the validation reference for the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations utilizing the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross section library release, namely the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version when compared to the experimental values and Serpent.
NASA Astrophysics Data System (ADS)
Piechna, A.; Cieślicki, K.; Lombarski, L.; Ciszek, B.
2015-02-01
Arterial walls are multilayer structures with nonlinear material characteristics. Furthermore, residual stresses exist in the unloaded state (zero-pressure condition) and affect arterial behavior. To investigate these phenomena, a number of theoretical and numerical studies have been performed; however, no experimental validation has yet been proposed and realized. We cannot remove residual stresses without damaging the arterial segment. In this paper we propose a novel experiment to validate a numerical model of an artery with residual stresses. The inspiration for our study originates from experiments made by Dobrin on dogs' arteries (1999). We applied the idea of turning the artery inside out. After such an operation the sequence of layers is reversed and the residual stresses are re-ordered. We performed several pressure-inflation tests on human Common Carotid Arteries (CCA) in normal and inverted configurations. The nonlinear arterial responses were obtained and compared to the numerical model. Computer simulations were carried out using commercial software applying the finite element method (FEM). These results are then discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, T; Ruan, D; Heinrich, M
2016-06-15
Purpose: To obtain a functional relationship that calibrates the lung tissue density change under free breathing conditions through correlating Jacobian values to Hounsfield units. Methods: Free-breathing lung computed tomography images were acquired using a fast helical CT protocol, where 25 scans were acquired per patient. Using a state-of-the-art deformable registration algorithm, a set of deformation vector fields (DVF) was generated to provide spatial mapping from the reference image geometry to the other free-breathing scans. These DVFs were used to generate Jacobian maps, which estimate voxelwise volume change. Subsequently, the set of 25 corresponding Jacobian values and voxel intensities in Hounsfield units (HU) was collected, and linear regression was performed based on the mass conservation relationship to correlate the volume change to density change. Based on the resulting fitting coefficients, the tissues were classified into parenchymal (Type I), vascular (Type II), and soft tissue (Type III) types. These coefficients modeled the voxelwise density variation during quiet breathing. The accuracy of the proposed method was assessed using the mean absolute difference in HU between the CT scan intensities and the model-predicted values. In addition, validation experiments employing a leave-five-out method were performed to evaluate the model accuracy. Results: The computed mean model errors were 23.30±9.54 HU, 29.31±10.67 HU, and 35.56±20.56 HU for types I, II, and III, respectively. The cross validation experiments averaged over 100 trials had mean errors of 30.02 ± 1.67 HU over the entire lung. These mean values were comparable with the estimated CT image background noise. Conclusion: The reported validation experiment statistics confirmed the lung density modeling during free breathing. The proposed technique is general and could be applied to a wide range of problem scenarios where accurate dynamic lung density information is needed. This work was supported in part by NIH R01 CA0096679.
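The calibration step described above lends itself to a compact illustration. The following Python sketch (synthetic arrays and an assumed regression form, HU against the reciprocal Jacobian following mass conservation; not the authors' code) shows a voxelwise least-squares fit and the mean-absolute-error accuracy check:

```python
# Voxelwise calibration sketch: across 25 registered scans, regress each
# voxel's HU intensity against 1/J (density scales as the reciprocal of
# the local volume change), then report the mean absolute model error.
import numpy as np

n_scans, n_voxels = 25, 1000
rng = np.random.default_rng(1)
jacobian = rng.uniform(0.8, 1.2, size=(n_scans, n_voxels))          # volume change
hu = -800 / jacobian + rng.normal(0, 20, size=(n_scans, n_voxels))  # synthetic HU

# Least-squares fit HU = a * (1/J) + b, independently for every voxel.
x = 1.0 / jacobian
x_mean, hu_mean = x.mean(axis=0), hu.mean(axis=0)
slope = ((x - x_mean) * (hu - hu_mean)).sum(axis=0) / ((x - x_mean) ** 2).sum(axis=0)
intercept = hu_mean - slope * x_mean

# Accuracy check: mean absolute difference between measured and predicted HU.
pred = slope * x + intercept
print(f"Mean absolute model error: {np.abs(hu - pred).mean():.1f} HU")
```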
Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment
NASA Technical Reports Server (NTRS)
Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo
2000-01-01
The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on line of sight jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to: validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, qualify the experiment for the Space Shuttle, and verify the functionality of the system. Included in the discussion are test parameters, test setups, problems encountered, and test results.
NASA Astrophysics Data System (ADS)
Mattsson, Thomas R.
2011-11-01
Significant progress has been made over the last few years in high energy density physics (HEDP) by executing high-precision multi-Mbar experiments and performing first-principles simulations for elements ranging from carbon [1] to xenon [2]. The properties of water under HEDP conditions are of particular importance in planetary science due to the existence of ice giants like Neptune and Uranus. Modeling the two planets, as well as water-rich exoplanets, requires knowing the equation of state (EOS), the pressure as a function of density and temperature, of water with high accuracy. Although extensive density functional theory (DFT) simulations have been performed for water under planetary conditions [3], experimental validation has been lacking. Accessing thermodynamic states along planetary isentropes in dynamic compression experiments is challenging because the principal Hugoniot follows a significantly different path in the phase diagram. In this talk, we present experimental data for dynamic compression of water up to 700 GPa, including a regime of the phase diagram intersected by the Neptune isentrope and water-rich models for the exoplanet GJ436b. The data were obtained on the Z-accelerator at Sandia National Laboratories by performing magnetically accelerated flyer plate impact experiments measuring both the shock and re-shock in the sample. The high accuracy makes it possible for the data to be used for detailed model validation: the results validate first-principles-based thermodynamics as a reliable foundation for planetary modeling and confirm the fine effect of including nuclear quantum effects on the shock pressure. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000. [1] M.D. Knudson, D.H. Dolan, and M.P. Desjarlais, Science 322, 1822 (2008). [2] S. Root, et al., Phys. Rev. Lett. 105, 085501 (2010). [3] M. French, et al., Phys. Rev. B 79, 054107 (2009).
Modified Moral Distress Scale (MDS-11): Validation Study Among Italian Nurses.
Badolamenti, Sondra; Fida, Roberto; Biagioli, Valentina; Caruso, Rosario; Zaghini, Francesco; Sili, Alessandro; Rea, Teresa
2017-01-01
Moral distress (MD) has significant implications for individual and organizational health. However, there is no instrument to assess it among Italian nurses. The main aim of this study was to validate a brief instrument to assess MD, developed from Corley's Moral Distress Scale (MDS). The modified MDS was subjected to content and cultural validity testing. The scale was administered to 347 nurses. Psychometric analyses were performed to assess construct validity. The scale consists of 11 items investigating MD in nursing practice in different clinical settings. The dimensionality of the scale was investigated through exploratory factor analysis (EFA), which showed a two-dimensional structure, with dimensions labeled futility and potential damage. Futility refers to feelings of powerlessness and ineffectiveness in some clinical situations; the potential damage dimension captures feelings of powerlessness when nurses are forced to tolerate or perform clinical proceedings they perceive as abusive. Nurses who experienced higher MD were more likely to experience burnout. The modified MDS showed good psychometric properties and is valid and reliable for assessing moral distress among Italian nurses. Hence, the modified MDS makes it possible to monitor the distress experienced by nurses and is an important contribution to the scientific community and all those dealing with the well-being of health workers.
Validating Inertial Confinement Fusion (ICF) predictive capability using perturbed capsules
NASA Astrophysics Data System (ADS)
Schmitt, Mark; Magelssen, Glenn; Tregillis, Ian; Hsu, Scott; Bradley, Paul; Dodd, Evan; Cobble, James; Flippo, Kirk; Offerman, Dustin; Obrey, Kimberly; Wang, Yi-Ming; Watt, Robert; Wilke, Mark; Wysocki, Frederick; Batha, Steven
2009-11-01
Achieving ignition on NIF is a monumental step on the path toward utilizing fusion as a controlled energy source. Obtaining robust ignition requires accurate ICF models to predict the degradation of ignition caused by heterogeneities in capsule construction and irradiation. LANL has embarked on a project to induce controlled defects in capsules to validate our ability to predict their effects on fusion burn. These efforts include the validation of feature-driven hydrodynamics and mix in a convergent geometry. This capability is needed to determine the performance of capsules imploded under less-than-optimum conditions on future IFE facilities. LANL's recently initiated Defect Implosion Experiments (DIME) conducted at Rochester's Omega facility are providing input for these efforts. Recent simulation and experimental results will be shown.
Electrotactile Feedback Improves Performance and Facilitates Learning in the Routine Grasping Task.
Isaković, Milica; Belić, Minja; Štrbac, Matija; Popović, Igor; Došen, Strahinja; Farina, Dario; Keller, Thierry
2016-06-13
The aim of this study was to investigate the feasibility of electrotactile feedback in closed-loop training of force control during a routine grasping task. The feedback was provided using an array electrode and a simple six-level spatial coding, and the experiment was conducted with three amputee subjects. The psychometric tests confirmed that the subjects could perceive and interpret the electrotactile feedback with a high success rate. The subjects performed the routine grasping task comprising 4 blocks of 60 grasping trials. In each trial, the subjects employed feedforward control to close the hand and produce the desired grasping force (four levels). The first (baseline) and last (validation) sessions were performed in open loop, while the second and third sessions (training) included electrotactile feedback. The obtained results confirmed that using the feedback improved the accuracy and precision of the force control. In addition, the subjects performed significantly better in the validation vs. baseline session, suggesting that electrotactile feedback can be used for learning and training of myoelectric control.
NASA Astrophysics Data System (ADS)
Serevina, V.; Muliyati, D.
2018-05-01
This research aims to develop a valid and reliable student performance assessment instrument, based on the scientific approach, for assessing student performance in the basic physics laboratory on Simple Harmonic Motion (SHM). The study uses the ADDIE model, consisting of the stages Analyze, Design, Development, Implementation, and Evaluation. The student performance assessment developed can be used to measure students' skills in observing, asking, conducting experiments, associating and communicating experimental results, which are the '5M' stages of the scientific approach. Each assessment item in the instrument was validated by an instrument expert, with the result that all assessment items were judged eligible for use (100% eligibility). The instrument was then evaluated for the quality of its construction, material, and language by a panel of lecturers, with the results: construction aspect 85%, or very good; material aspect 87.5%, or very good; and language aspect 83%, or very good. The small group trial yielded an instrument reliability of 0.878, in the high category, where the r-table value is 0.707. The large group trial yielded an instrument reliability of 0.889, also in the high category, where the r-table value is 0.320. The instrument was declared valid and reliable at the 5% significance level. Based on the results of this research, it can be concluded that the student performance assessment instrument based on the scientific approach is valid and reliable for assessing student skills in SHM experimental activities.
Validation of computer simulation training for esophagogastroduodenoscopy: Pilot study.
Sedlack, Robert E
2007-08-01
Little is known regarding the value of esophagogastroduodenoscopy (EGD) simulators in education. The purpose of the present paper was to validate the use of computer simulation in novice EGD training. In phase 1, expert endoscopists evaluated various aspects of simulation fidelity as compared to live endoscopy. Additionally, computer-recorded performance metrics were assessed by comparing the recorded scores from users of three different experience levels. In phase 2, the transfer of simulation-acquired skills to the clinical setting was assessed in a two-group, randomized pilot study. The setting was a large gastroenterology (GI) Fellowship training program; in phase 1, 21 subjects (seven each of expert, intermediate and novice endoscopists) made up the three experience groups. In phase 2, eight novice GI fellows were involved in the two-group, randomized portion of the study examining the transfer of simulation skills to the clinical setting. During the initial validation phase, each of the 21 subjects completed two standardized EGD scenarios on a computer simulator and their performance scores were recorded for seven parameters. Following this, staff participants completed a questionnaire evaluating various aspects of the simulator's fidelity. Finally, four novice GI fellows were randomly assigned to receive 6 h of simulator-augmented training (SAT group) in EGD prior to beginning 1 month of patient-based EGD training. The remaining fellows experienced 1 month of patient-based training alone (PBT group). Results of the seven measured performance parameters were compared between the three groups of varying experience using a Wilcoxon rank-sum test. The staff's simulator fidelity survey used a 7-point Likert scale (1, very unrealistic; 4, neutral; 7, very realistic) for each of the parameters examined. During the second phase of this study, supervising staff rated both SAT and PBT fellows' patient-based performance daily. Scoring in each skill was completed using a 7-point Likert scale (1, strongly disagree; 4, neutral; 7, strongly agree). Median scores were compared between groups using the Wilcoxon rank-sum test. Staff evaluations of fidelity found that only two of the parameters examined (anatomy and scope maneuverability) had a significant degree of realism. The remaining areas were felt to be limited in their fidelity. Of the computer-recorded performance scores, only the novice group could be reliably distinguished from the other two experience groups. In the clinical application phase, the median Patient Discomfort ratings were superior in the PBT group (6; interquartile range [IQR], 5-6) as compared to the SAT group (5; IQR, 4-6; P = 0.015). PBT fellows' ratings were also superior in Sedation, Patient Discomfort, Independence and Competence during various phases of the evaluation. At no point were SAT fellows rated higher than the PBT group in any of the parameters examined. This EGD simulator has limitations in its degree of fidelity and can differentiate only novice endoscopists from other levels of experience. Finally, skills learned during EGD simulation training do not appear to translate well into patient-based endoscopy skills. These findings argue against a key element of validity for the use of this computer simulator in novice EGD training.
Scherrer, Stephen R; Rideout, Brendan P; Giorli, Giacomo; Nosal, Eva-Marie; Weng, Kevin C
2018-01-01
Passive acoustic telemetry using coded transmitter tags and stationary receivers is a popular method for tracking movements of aquatic animals. Understanding the performance of these systems is important in array design and in analysis. Close proximity detection interference (CPDI) is a condition where receivers fail to reliably detect tag transmissions. CPDI generally occurs when the tag and receiver are near one another in acoustically reverberant settings. Here we confirm that transmission multipaths reflected off the environment, arriving at a receiver with sufficient delay relative to the direct signal, cause CPDI. We propose a ray-propagation based model to estimate the arrival of energy via multipaths to predict CPDI occurrence, and we show how deeper deployments are particularly susceptible. A series of experiments was designed to develop and validate our model. Deep (300 m) and shallow (25 m) ranging experiments were conducted using Vemco V13 acoustic tags and VR2-W receivers. Probabilistic modeling of hourly detections was used to estimate the average distance at which a tag could be detected. A mechanistic model for predicting the arrival time of multipaths was developed using parameters from these experiments to calculate the direct and multipath path lengths. This model was retroactively applied to the previous ranging experiments to validate CPDI observations. Two additional experiments were designed to validate predictions of CPDI with respect to combinations of deployment depth and distance. Playback of recorded tags in a tank environment was used to confirm that multipaths arriving after the receiver's blanking interval cause CPDI effects. Analysis of empirical data estimated that the average maximum detection radius (AMDR), the farthest distance at which 95% of tag transmissions went undetected by receivers, was between 840 and 846 m for the deep ranging experiment across all factor permutations. From these results, CPDI was estimated within a 276.5 m radius of the receiver. These empirical estimations were consistent with mechanistic model predictions, which placed CPDI effects at distances closer than 259-326 m from receivers. AMDR determined from the shallow ranging experiment was between 278 and 290 m, with CPDI neither predicted nor observed. Results of the validation experiments were consistent with mechanistic model predictions. Finally, we were able to predict detection/nondetection with 95.7% accuracy using the mechanistic model's criterion when simulating transmissions with and without multipaths. Close proximity detection interference results from combinations of depth and distance that produce reflected signals arriving after a receiver's blanking interval has ended. Deployment scenarios resulting in CPDI can be predicted with the proposed mechanistic model. For deeper deployments, sea-surface reflections can produce CPDI conditions, resulting in transmission rejection, regardless of the reflective properties of the seafloor.
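The ray-geometry argument above can be made concrete with a short sketch. The following Python illustration (nominal sound speed and an assumed blanking interval; not the authors' code or a Vemco specification) mirrors the tag across the sea surface to compute the multipath delay, flagging CPDI when that delay exceeds the blanking interval:

```python
# Flag potential CPDI: compare the extra travel time of a sea-surface-
# reflected path against the receiver's blanking interval.
import math

SOUND_SPEED = 1500.0       # m/s, nominal seawater value
BLANKING_INTERVAL = 0.260  # s, assumed receiver blanking interval

def cpdi_expected(tag_depth, rx_depth, horizontal_dist):
    """True if the surface-reflected multipath arrives after the
    blanking interval, late enough to corrupt decoding."""
    direct = math.hypot(horizontal_dist, tag_depth - rx_depth)
    # Mirror the tag above the surface to get the reflected path length.
    reflected = math.hypot(horizontal_dist, tag_depth + rx_depth)
    delay = (reflected - direct) / SOUND_SPEED
    return delay > BLANKING_INTERVAL

# Deep deployment, tag close to the receiver: multipath badly delayed.
print(cpdi_expected(tag_depth=300, rx_depth=300, horizontal_dist=50))   # True
# Same geometry but shallow: reflections arrive almost immediately.
print(cpdi_expected(tag_depth=25, rx_depth=25, horizontal_dist=50))     # False
```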
Beyond associations: Do implicit beliefs play a role in smoking addiction?
Tibboel, Helen; De Houwer, Jan; Dirix, Nicolas; Spruyt, Adriaan
2017-01-01
Influential dual-system models of addiction suggest that an automatic system that is associative and habitual promotes drug use, whereas a controlled system that is propositional and rational inhibits drug use. It is assumed that effects on the Implicit Association Test (IAT) reflect the automatic processes that guide drug seeking. However, results have been inconsistent, challenging: (1) the validity of addiction IATs; and (2) the assumption that the automatic system contains only simple associative information. We aimed to further test the validity of IATs used within this field of research using an experimental design. Second, we introduced a new procedure aimed at examining the automatic activation of complex propositional knowledge, the Relational Responding Task (RRT), and examined the validity of RRT effects in the context of smoking. In two experiments, smokers performed two different tasks: an approach/avoid IAT and a liking IAT in Experiment 1, and a smoking urges RRT and a valence IAT in Experiment 2. Smokers were tested once immediately after smoking and once after 10 hours of nicotine deprivation. None of the IAT scores were affected by the deprivation manipulation. RRT scores revealed a stronger implicit desire for smoking in the deprivation condition compared to the satiation condition. IATs that are currently used to assess automatic processes in addiction have serious drawbacks. Furthermore, the automatic system may contain not only associations but complex drug-related beliefs as well. The RRT may be a useful and valid tool to examine these beliefs.
NASA Astrophysics Data System (ADS)
Class, G.; Meyder, R.; Stratmanns, E.
1985-12-01
The large database for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture, extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This agreement can be improved by updating the phase separation models in the codes.
Grants4Targets - an innovative approach to translate ideas from basic research into novel drugs.
Lessl, Monika; Schoepe, Stefanie; Sommer, Anette; Schneider, Martin; Asadullah, Khusru
2011-04-01
Collaborations between industry and academia are steadily gaining importance. To combine expertise, Bayer Healthcare has set up a novel open innovation approach called Grants4Targets. Ideas on novel drug targets can easily be submitted to http://www.grants4targets.com. After a review process, grants are provided to perform focused experiments to further validate the proposed targets. In addition to financial support, specific know-how on target validation and drug discovery is provided. Experienced scientists are nominated as project partners and, depending on the project, tools or specific models are provided. Around 280 applications have been received and 41 projects granted. In our experience, this type of bridging fund, combined with joint efforts, provides a valuable tool to foster drug discovery collaborations. Copyright © 2010 Elsevier Ltd. All rights reserved.
Testing the Construct Validity of a Virtual Reality Hip Arthroscopy Simulator.
Khanduja, Vikas; Lawrence, John E; Audenaert, Emmanuel
2017-03-01
To test the construct validity of the hip diagnostics module of a virtual reality hip arthroscopy simulator. Nineteen orthopaedic surgeons performed a simulated arthroscopic examination of a healthy hip joint using a 70° arthroscope in the supine position. Surgeons were categorized as either expert (those who had performed 250 hip arthroscopies or more) or novice (those who had performed fewer than this). Twenty-one specific targets were visualized within the central and peripheral compartments; 9 via the anterior portal, 9 via the anterolateral portal, and 3 via the posterolateral portal. This was immediately followed by a task testing basic probe examination of the joint in which a series of 8 targets were probed via the anterolateral portal. During the tasks, the surgeon's performance was evaluated by the simulator using a set of predefined metrics including task duration, number of soft tissue and bone collisions, and distance travelled by instruments. No repeat attempts at the tasks were permitted. Construct validity was then evaluated by comparing novice and expert group performance metrics over the 2 tasks using the Mann-Whitney test, with a P value of less than .05 considered significant. On the visualization task, the expert group outperformed the novice group on time taken (P = .0003), number of collisions with soft tissue (P = .001), number of collisions with bone (P = .002), and distance travelled by the arthroscope (P = .02). On the probe examination, the 2 groups differed only in the time taken to complete the task (P = .025) with no significant difference in other metrics. Increased experience in hip arthroscopy was reflected by significantly better performance on the virtual reality simulator across 2 tasks, supporting its construct validity. This study validates a virtual reality hip arthroscopy simulator and supports its potential for developing basic arthroscopic skills. Level III. Copyright © 2016 Arthroscopy Association of North America. All rights reserved.
2013-03-01
operation. 2.1.2 Canine model The canine experiment (n ¼ 1) was performed as a validation of the correlation of visible reflectance imaging measurements...http://spiedl.org/terms with actual blood oxygenation. The canine laparotomy, as part of an animal protocol approved by the Institutional Animal Care and...All data analysis was performed using algorithms and software written in-house using the programming languages Matlab and IDL/ ENVI (ITT Visual
Psychomotor Vigilance Self Test on ISS (Reaction Self Test on Expeditions 21 and 22)
NASA Technical Reports Server (NTRS)
Dinges, David F.; Mollicone, Daniel; Ecker, Adrian
2009-01-01
The experiment addresses the following high-priority NASA Risk Gaps in the Behavioral Health and Performance (BHP) area: 1) Identify brief, valid objective measures of changes in cognitive functions during spaceflight that astronauts can use with minimal burden. 2) Find a practical objective aid for astronauts to quickly identify and manage the effects of fatigue (from sleep loss, circadian disruptions, workload and other factors) on their performance during space flight.
2016-06-30
Performing organization: Texas A&M Engineering Experiment Station (TEES). ... 0.7% strain when the dilatational energy density reaches the experimentally determined critical value (0.2 MPa). To validate whether the critical ... implementation against experimental results in terms of the crack path shape. We perform convergence studies in terms of the nonlocal region size for
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin
2013-01-01
Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, both using data from the telemedicine network and other public databases.
Yusof, Zamros Y M; Jaafar, Nasruddin
2012-06-08
The study aimed to develop and test a Malay version of the Child-OIDP index, evaluate its psychometric properties and report on the prevalence of oral impacts on eight daily performances in a sample of 11-12 year old Malaysian schoolchildren. The Child-OIDP index was translated from English into Malay. The Malay version was tested for reliability and validity on a non-random sample of 132 11-12 year old schoolchildren from two urban schools in Kuala Lumpur. Psychometric analysis of the Malay Child-OIDP involved face, content, criterion and construct validity tests as well as internal and test-retest reliability. Non-parametric statistical methods were used to assess relationships between Child-OIDP scores and other subjective outcome measures. The standardised Cronbach's alpha was 0.80 and the weighted kappa was 0.84 (intraclass correlation = 0.79). The index showed significant associations with different subjective measures, viz. perceived satisfaction with mouth, perceived need for dental treatment, perceived oral health status and toothache experience in the previous 3 months (p < 0.05). Two-thirds (66.7%) of the sample had oral impacts affecting one or more performances in the past 3 months. The three most frequently affected performances were cleaning teeth (36.4%), eating foods (34.8%) and maintaining emotional stability (26.5%). In terms of severity of impact, the ability to relax was most severely affected by their oral conditions, followed by the ability to socialise and doing schoolwork. Almost three-quarters (74.2%) of schoolchildren with oral impacts had up to three performances affected by their oral conditions. This study indicated that the Malay Child-OIDP index is a valid and reliable instrument to measure the oral impacts of daily performances in 11-12 year old urban schoolchildren in Malaysia.
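The two reliability statistics quoted above can be reproduced on toy data. The sketch below (invented scores; the item count and scoring range are assumptions) computes Cronbach's alpha across items and a weighted kappa for test-retest agreement:

```python
# Cronbach's alpha over correlated item scores, plus a linearly weighted
# kappa for two ratings of the same children (all data invented).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
# Hypothetical item scores: 10 children x 8 performances (0-3 each),
# correlated through a shared latent severity so alpha is meaningful.
latent = rng.integers(0, 4, size=(10, 1))
items = np.clip(latent + rng.integers(-1, 2, size=(10, 8)), 0, 3)

def cronbach_alpha(scores):
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# Test-retest: the same children rated on two occasions (hypothetical).
test = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
retest = [0, 1, 2, 2, 1, 2, 0, 3, 2, 1]
print(f"Weighted kappa: {cohen_kappa_score(test, retest, weights='linear'):.2f}")
```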
Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).
Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando
2014-07-02
Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, the Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square = 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Using Speech Recall in Hearing Aid Fitting and Outcome Evaluation Under Ecological Test Conditions.
Lunner, Thomas; Rudner, Mary; Rosenbom, Tove; Ågren, Jessica; Ng, Elaine Hoi Ning
2016-01-01
In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal to noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL) was introduced and evaluated in two experiments. The objective of experiment 1 was to determine if the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.
Oh, Deborah M; Kim, Joshua M; Garcia, Raymond E; Krilowicz, Beverly L
2005-06-01
There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual courses and for degree-granting programs. We report here on a capstone course developed by two natural sciences departments, Biological Sciences and Chemistry/Biochemistry, which engages students in an important culminating experience, requiring synthesis of skills and knowledge developed throughout the program while providing the departments with important assessment information for use in program improvement. The student work products produced in the course (a written grant proposal and an oral summary of the proposal) provide a rich source of data regarding student performance on an authentic assessment task. The validity and reliability of the instruments and the resulting student performance data were demonstrated by collaborative review by content experts and a variety of statistical measures of interrater reliability, including percentage agreement, intraclass correlations, and generalizability coefficients. The high interrater reliability reported when the assessment instruments were used for the first time by a group of external evaluators suggests that the assessment process and instruments reported here will be easily adopted by other natural science faculty.
Propulsion Risk Reduction Activities for Non-Toxic Cryogenic Propulsion
NASA Technical Reports Server (NTRS)
Smith, Timothy D.; Klem, Mark D.; Fisher, Kenneth
2010-01-01
The Propulsion and Cryogenics Advanced Development (PCAD) Project's primary objective is to develop propulsion system technologies for non-toxic or "green" propellants. The PCAD project focuses on the development of non-toxic propulsion technologies needed to provide necessary data and relevant experience to support informed decisions on the implementation of non-toxic propellants for space missions. Implementation of non-toxic propellants in high performance propulsion systems offers NASA an opportunity to consider options other than the current hypergolic propellants. The PCAD Project is emphasizing technology efforts in reaction control system (RCS) thruster designs, ascent main engines (AME), and descent main engines (DME). PCAD has a series of tasks and contracts to conduct risk reduction and/or retirement activities to demonstrate that non-toxic cryogenic propellants can be a feasible option for space missions. Work has focused on 1) reducing the risk of liquid oxygen/liquid methane ignition, demonstrating the key enabling technologies, and validating performance levels for reaction control engines for use on descent and ascent stages; 2) demonstrating the key enabling technologies and validating performance levels for liquid oxygen/liquid methane ascent engines; and 3) demonstrating the key enabling technologies and validating performance levels for deep throttling liquid oxygen/liquid hydrogen descent engines. The progress of these risk reduction and/or retirement activities will be presented.
Simplified Summative Temporal Bone Dissection Scale Demonstrates Equivalence to Existing Measures.
Pisa, Justyn; Gousseau, Michael; Mowat, Stephanie; Westerberg, Brian; Unger, Bert; Hochman, Jordan B
2018-01-01
Emphasis on patient safety has created the need for quality assessment of fundamental surgical skills. Existing temporal bone rating scales are laborious, subject to evaluator fatigue, and contain inconsistencies when conferring points. To address these deficiencies, a novel binary assessment tool was designed and validated against a well-established rating scale. Residents completed a mastoidectomy with posterior tympanotomy on identical 3D-printed temporal bone models. Four neurotologists evaluated each specimen using a validated scale (Welling) and a newly developed "CanadaWest" scale, with scoring repeated after a 4-week interval. Nineteen participants were clustered into junior, intermediate, and senior cohorts. An ANOVA found significant differences between the performance of the junior-intermediate and junior-senior cohorts for both the Welling and CanadaWest scales (P < .05). Neither scale found a significant difference between intermediate-senior resident performance (P > .05). Cohen's kappa showed strong intrarater reliability (0.711) and a high degree of interrater reliability (0.858) for the CanadaWest scale, similar to the Welling scale values of 0.713 and 0.917, respectively. The CanadaWest scale was easy to use and delineated performance by experience level with strong intrarater reliability. Comparable to the validated Welling Scale, it distinguished junior from senior trainees but was challenged in differentiating intermediate and senior trainee performance.
Analysis of link performance for the FOENEX laser communications system
NASA Astrophysics Data System (ADS)
Juarez, Juan C.; Young, David W.; Venkat, Radha A.; Brown, David M.; Brown, Andrea M.; Oberc, Rachel L.; Sluz, Joseph E.; Pike, H. Alan; Stotts, Larry B.
2012-06-01
A series of experiments was conducted to validate the performance of the free-space optical communications (FSOC) subsystem under DARPA's FOENEX program. Over six days, bidirectional links at ranges of 10 and 17 km were characterized during different periods of the day to evaluate link performance. This paper will present the test configuration, evaluate performance of the FSOC subsystem against a variety of characterization approaches, and discuss the impact of the results, particularly with regard to the optical terminals. Finally, this paper will summarize the impact of turbulence conditions on the FSOC subsystem and present methods for estimating performance under different link distances and turbulence conditions.
Balestrieri, M; Giaroli, G; Mazzi, M; Bellantuono, C
2006-05-01
Several studies indicate that the subjective experience of antipsychotic drugs (APs) in schizophrenic patients is a key factor in ensuring a smooth recovery from the illness. The principal aim of this study was to establish the psychometric performance of the Subjective Well-being Under Neuroleptic (SWN) scale in its Italian version and to assess, through the SWN scale, the subjective experience of stabilized psychotic outpatients on maintenance treatment with APs. The original short version of the SWN, consisting of 20 items, was back-translated, and a focus group was also conducted to improve the comprehension of the scale. The results showed a good performance of the Italian version of the SWN, as documented by its internal consistency (Cronbach's alpha = 0.85). A satisfactory subjective experience was reported in the sample of schizophrenic outpatients interviewed (SWN mean total score: 84.95, SD: 17.5). The performance of the SWN scale in the present study was very similar to that reported by Naber et al. in the original validation study. Large multi-center studies are needed to better establish differences in the subjective experience of schizophrenic patients treated with first- and second-generation APs.
Flight control system design factors for applying automated testing techniques
NASA Technical Reports Server (NTRS)
Sitz, Joel R.; Vernon, Todd H.
1990-01-01
Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.
NASA Technical Reports Server (NTRS)
Trauger, John
2008-01-01
Topics include an overview, science objectives, study objectives, coronagraph types, metrics, the ACCESS observatory, laboratory validations, and a summary. Individual slides examine the ACCESS engineering approach, the ACCESS gamut of coronagraph types, coronagraph metrics, the ACCESS Discovery Space, coronagraph optical layout, wavefront control on the "level playing field", deformable mirror development for HCIT, laboratory testbed demonstrations, high contrast imaging with the HCIT, laboratory coronagraph contrast and stability, model validation and performance predictions, HCIT coronagraph optical layout, the Lyot coronagraph on the HCIT, pupil mapping (PIAA), shaped pupils, and vortex phase mask experiments on the HCIT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Marshall, William BJ J
In the course of criticality code validation, outlier cases are frequently encountered. Historically, the causes of these unexpected results could be diagnosed only through comparison with other similar cases or through the known presence of a unique component of the critical experiment. The sensitivity and uncertainty (S/U) analysis tools available in the SCALE 6.1 code system provide a much broader range of options to examine underlying causes of outlier cases. This paper presents some case studies performed as a part of the recent validation of the KENO codes in SCALE 6.1 using S/U tools to examine potential causes of biases.
Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes
NASA Astrophysics Data System (ADS)
Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico
2017-12-01
Chemical compositional simulation of enhanced oil recovery and surfactant enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of the newly developed mathematical formulation, which satisfies the conservation laws of mass and energy and allows applying a sequential solution approach to solve the governing equations separately and implicitly. Through application to a numerical experiment using a wettability alteration model, and through comparison with the numerical results of an existing chemical compositional model, the new formulation has proven to be practical, reliable and stable.
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation, that is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified parameterization of the entrainment rate: the proposed parameterization suppressed excessive entrainment, and thus an excessive increase of low-level clouds.
Herzog, Annabel; Voigt, Katharina; Meyer, Björn; Wollburg, Eileen; Weinmann, Nina; Langs, Gernot; Löwe, Bernd
2015-06-01
The new DSM-5 Somatic Symptom Disorder (SSD) emphasizes the importance of psychological processes related to somatic symptoms in patients with somatoform disorders. To address this, the Somatic Symptoms Experiences Questionnaire (SSEQ), the first self-report scale that assesses a broad range of psychological and interactional characteristics relevant to patients with a somatoform disorder or SSD, was developed. This prospective study was conducted to validate the SSEQ. The 15-item SSEQ was administered along with a battery of self-report questionnaires to psychosomatic inpatients. Patients were assessed with the Structured Clinical Interview for DSM-IV to confirm a somatoform, depressive, or anxiety disorder. Confirmatory factor analyses, tests of internal consistency and tests of validity were performed. Patients (n=262) with a mean age of 43.4 years, 60.3% women, were included in the analyses. The previously observed four-factor model was replicated and internal consistency was good (Cronbach's α=.90). Patients with a somatoform disorder had significantly higher scores on the SSEQ (t=4.24, p<.001) than patients with a depressive/anxiety disorder. Construct validity was shown by high correlations with other instruments measuring related constructs. Hierarchical multiple regression analyses showed that the questionnaire predicted health-related quality of life. Sensitivity to change was shown by significantly higher effect sizes of the SSEQ change scores for improved patients than for patients without improvement. The SSEQ appears to be a reliable, valid, and efficient instrument to assess a broad range of psychological and interactional features related to the experience of somatic symptoms. Copyright © 2015 Elsevier Inc. All rights reserved.
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties, and a set of variables was found that disproportionately contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms, in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
Gillen, Sonja; Gröne, Jörn; Knödgen, Fritz; Wolf, Petra; Meyer, Michael; Friess, Helmut; Buhr, Heinz-Johannes; Ritz, Jörg-Peter; Feussner, Hubertus; Lehmann, Kai S
2012-08-01
Natural orifice translumenal endoscopic surgery (NOTES) is a new surgical concept that requires training before it is introduced into clinical practice. The endoscopic–laparoscopic interdisciplinary training entity (ELITE) is a training model for NOTES interventions. The latest research has concentrated on new materials for organs with realistic optical and haptic characteristics and the possibility of high-frequency dissection. This study aimed to assess both the ELITE model in a surgical training course and the construct validity of a newly developed NOTES appendectomy scenario. The 70 attendees of the 2010 Practical Course for Visceral Surgery (Warnemuende, Germany) took part in the study and performed a NOTES appendectomy via a transsigmoidal access. The primary end point was the total time required for the appendectomy, including retrieval of the appendix. Subjective evaluation of the model was performed using a questionnaire. Subgroups were analyzed according to laparoscopic and endoscopic experience. The participants with endoscopic or laparoscopic experience completed the task significantly faster than the inexperienced participants (p = 0.009 and 0.019, respectively). Endoscopic experience was the strongest influencing factor, whereas laparoscopic experience had limited impact on the participants with previous endoscopic experience. As shown by the findings, 87.3% of the participants stated that the ELITE model was suitable for the NOTES training scenario, and 88.7% found the newly developed model anatomically realistic. This study was able to establish face and construct validity for the ELITE model with a large group of surgeons. The ELITE model seems to be well suited for the training of NOTES as a new surgical technique in an established gastrointestinal surgery skills course.
Design and validation of an intelligent wheelchair towards a clinically-functional outcome
2013-01-01
Background Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. Methods The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. Results User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. Conclusions The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode. PMID:23773851
Is there inter-procedural transfer of skills in intraocular surgery? A randomized controlled trial.
Thomsen, Ann Sofia Skou; Kiilgaard, Jens Folke; la Cour, Morten; Brydges, Ryan; Konge, Lars
2017-12-01
To investigate how experience in simulated cataract surgery impacts and transfers to the learning curves for novices in vitreoretinal surgery. Twelve ophthalmology residents without previous experience in intraocular surgery were randomized to (1) intensive training in cataract surgery on a virtual-reality simulator until passing a test with predefined validity evidence (cataract trainees) or to (2) no cataract surgery training (novices). Possible skill transfer was assessed using a test consisting of all 11 vitreoretinal modules on the EyeSi virtual-reality simulator. All participants repeated the test of vitreoretinal surgical skills until their performance curve plateaued. Three experienced vitreoretinal surgeons also performed the test to establish validity evidence. Analysis with independent samples t-tests was performed. The vitreoretinal test on the EyeSi simulator demonstrated evidence of validity, given statistically significant differences in mean test scores for the first repetition; experienced surgeons scored higher than novices (p = 0.023) and cataract trainees (p = 0.003). Internal consistency for the 11 modules of the test was acceptable (Cronbach's α = 0.73). Our findings did not indicate a transfer effect with no significant differences found between cataract trainees and novices in their starting scores (mean ± SD 381 ± 129 points versus 455 ± 82 points, p = 0.262), time to reach maximum performance level (10.7 ± 3.0 hr versus 8.7 ± 2.8 hr, p = 0.265), or maximum scores (785 ± 162 points versus 805 ± 73 points, p = 0.791). Pretraining in cataract surgery did not demonstrate any measurable effect on vitreoretinal procedural performance. The results of this study indicate that we should not anticipate extensive transfer of surgical skills when planning training programmes in intraocular surgery. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Feasibility study for wax deposition imaging in oil pipelines by PGNAA technique.
Cheng, Can; Jia, Wenbao; Hei, Daqian; Wei, Zhiyong; Wang, Hongtao
2017-10-01
Wax deposition in pipelines is a crucial problem in the oil industry. A method based on the prompt gamma-ray neutron activation analysis technique was applied to reconstruct the image of wax deposition in oil pipelines. The 2.223 MeV hydrogen capture gamma rays were used to reconstruct the wax deposition image. To validate the method, both MCNP simulation and experiments were performed for wax deposited with a maximum thickness of 20 cm. The performance of the method was simulated using the MCNP code. The experiment was conducted with a 252Cf neutron source and a LaBr3:Ce detector. A good correspondence between the simulations and the experiments was observed. The results obtained indicate that the present approach is efficient for wax deposition imaging in oil pipelines. Copyright © 2017 Elsevier Ltd. All rights reserved.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
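As an illustration of the kind of random-usage failure process such a study simulates, here is a small sketch of a per-fault exponential (Jelinski-Moranda-type) reliability-growth model; the fault count and rate are made up, and this is not the GCS experiment's actual simulator.

```python
# Illustrative sketch only (hypothetical parameters): simulate inter-failure
# times when each failure removes one fault, so the failure rate decays and
# the gaps between failures lengthen (reliability growth under debugging).
import numpy as np

def simulate_failures(n_faults: int = 30, per_fault_rate: float = 0.01,
                      seed: int = 0) -> np.ndarray:
    """Return inter-failure times under a Jelinski-Moranda-type model."""
    rng = np.random.default_rng(seed)
    remaining = np.arange(n_faults, 0, -1)       # faults left before each failure
    return rng.exponential(1.0 / (per_fault_rate * remaining))

times = simulate_failures()
print("mean time between failures, first vs last five:",
      round(times[:5].mean(), 1), round(times[-5:].mean(), 1))
```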
Pattarino, Franco; Piepel, Greg; Rinaldi, Maurizio
2018-05-30
A paper by Foglio Bonda et al. published previously in this journal (2016, Vol. 83, pp. 175-183) discussed the use of mixture experiment design and modeling methods to study how the proportions of three components in an extemporaneous oral suspension affected the mean diameter of drug particles (Z_ave). The three components were itraconazole (ITZ), Tween 20 (TW20), and Methocel® E5 (E5). This commentary addresses some errors and other issues in the previous paper, and also discusses an improved model relating proportions of ITZ, TW20, and E5 to Z_ave. The improved model contains six of the 10 terms in the full-cubic mixture model, which were selected using a different cross-validation procedure than used in the previous paper. Compared to the four-term model presented in the previous paper, the improved model fit the data better, had excellent cross-validation performance, and the predicted Z_ave of a validation point was within model uncertainty of the measured value. Copyright © 2018 Elsevier B.V. All rights reserved.
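The modeling approach described, a Scheffé full-cubic mixture model for three components with terms chosen by cross-validation, can be sketched as follows; the proportions, responses, and the exhaustive six-term search below are hypothetical stand-ins for the commentary's actual procedure.

```python
# Hedged sketch: build the 10 Scheffé full-cubic terms for a 3-component
# mixture and pick a six-term subset by leave-one-out cross-validated MSE.
# Data are simulated; a real analysis would constrain the candidate subsets.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def full_cubic_terms(X: np.ndarray) -> np.ndarray:
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3,
                            x1*x2*(x1-x2), x1*x3*(x1-x3),
                            x2*x3*(x2-x3), x1*x2*x3])

rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(3), size=20)        # proportions sum to 1
y = 200*X[:,0] + 150*X[:,1] + 300*X[:,2] + 500*X[:,0]*X[:,1] + rng.normal(0, 5, 20)

T = full_cubic_terms(X)
best = None
for subset in itertools.combinations(range(10), 6):
    # no intercept: Scheffé mixture models absorb it into the linear blend
    score = cross_val_score(LinearRegression(fit_intercept=False), T[:, subset], y,
                            cv=LeaveOneOut(), scoring="neg_mean_squared_error").mean()
    if best is None or score > best[0]:
        best = (score, subset)
print("selected terms:", best[1], "LOO MSE:", round(-best[0], 2))
```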
Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas
2017-03-18
Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.
Wang, Xiao Jun; Wong, Ching Man; Chan, Alexandre
2016-09-01
The Functional Assessment of Cancer Therapy-Neutropenia (FACT-N) is a neutropenia-specific questionnaire to assess patients' health-related quality of life. This study aimed to examine the psychometric properties of the FACT-N among cancer patients with chemotherapy-induced neutropenia (CIN). This prospective, cross-sectional study included multiethnic Asian cancer patients. Patients completed the questionnaires within seven days of being diagnosed with CIN. Eligible patients completed either the English or Chinese version of the EuroQol 5-Dimensions (EQ-5D) and the FACT-N once, according to their language preference. Reliability was evaluated using Cronbach's alpha (α). Known-group validity was assessed based on patients' Eastern Cooperative Oncology Group performance status, neutropenia grade, and experience of fever. Convergent validity was evaluated by contrasting the FACT-N subscales with the EQ-5D domains. Multiple linear regression models were performed to compare the FACT-N total scores between the two language versions. A total of 276 eligible patients (200 English speaking and 76 Chinese speaking) were included in this study. Internal consistencies within the FACT-N subscales were satisfactory (Cronbach's α = 0.71-0.85), except for the flu-like symptoms subscale (Cronbach's α = 0.67). For known-group validity, the FACT-N total score could differentiate patients according to their Eastern Cooperative Oncology Group performance status (P < 0.001), neutropenia grade (P = 0.028), and experience of fever (P < 0.001). The correlations between the FACT-N subscales and their hypothesized constructs in the EQ-5D domains were weak to moderate (|r| = 0.15-0.44). Measurement equivalence between the English and Chinese versions was established for the FACT-N total scores. The FACT-N is a valid and reliable instrument for use in clinical practice to evaluate health-related quality of life among multiethnic Asian patients with CIN. Copyright © 2016. Published by Elsevier Inc.
Design of the EO-1 Pulsed Plasma Thruster Attitude Control Experiment
NASA Technical Reports Server (NTRS)
Zakrzwski, Charles; Sanneman, Paul; Hunt, Teresa; Blackman, Kathie; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Pulsed Plasma Thruster (PPT) Experiment on the Earth Observing 1 (EO-1) spacecraft has been designed to demonstrate the capability of a new-generation PPT to perform spacecraft attitude control. The PPT is a small, self-contained pulsed electromagnetic propulsion system capable of delivering high specific impulse (900-1200 s) and very small impulse bits (10-1000 micro-N-s) at low average power (less than 1 to 100 W). EO-1 has a single PPT that can produce torque in either the positive or negative pitch direction. For the PPT in-flight experiment, the pitch reaction wheel will be replaced by the PPT during nominal EO-1 nadir pointing. A PPT-specific proportional-integral-derivative (PID) control algorithm was developed for the experiment. High fidelity simulations of the spacecraft attitude control capability using the PPT were conducted. The simulations, which showed PPT control performance within acceptable mission limits, will be used as the benchmark for on-orbit performance. The flight validation will demonstrate the ability of the PPT to provide precision pointing resolution, response, and stability as an attitude control actuator.
Expression signature as a biomarker for prenatal diagnosis of trisomy 21.
Volk, Marija; Maver, Aleš; Lovrečić, Luca; Juvan, Peter; Peterlin, Borut
2013-01-01
A universal biomarker panel with the potential to predict high-risk pregnancies or adverse pregnancy outcome does not exist. Transcriptome analysis is a powerful tool to capture differentially expressed genes (DEG), which can be used as a biomarker and diagnostic-predictive tool for various conditions in the prenatal setting. In search of a biomarker set for predicting high-risk pregnancies, we performed global expression profiling to find DEG in Ts21. Subsequently, we performed targeted validation and diagnostic performance evaluation on a larger group of case and control samples. Initially, transcriptomic profiles of 10 cultivated amniocyte samples with Ts21 and 9 with a normal euploid constitution were determined using expression microarrays. Datasets from Ts21 transcriptomic studies from the GEO repository were incorporated. DEG were discovered using linear regression modelling and validated using RT-PCR quantification on an independent sample of 16 cases with Ts21 and 32 controls. The classification of Ts21 status based on expression profiling was performed using a supervised machine learning algorithm and evaluated using a leave-one-out cross-validation approach. Global gene expression profiling revealed significant expression changes between normal and Ts21 samples, which, in combination with data from previously performed Ts21 transcriptomic studies, were used to generate a multi-gene biomarker for Ts21, comprising 9 gene expression profiles. In addition to the biomarker's high performance in discriminating samples from global expression profiling, we were also able to show its discriminatory performance on the larger sample set 2, validated using the RT-PCR experiment (AUC = 0.97), while its performance on data from previously published studies reached discriminatory AUC values of 1.00. Our results show that transcriptomic changes might potentially be used to discriminate trisomy of chromosome 21 in the prenatal setting. As expressional alterations reflect both causal and reactive cellular mechanisms, transcriptomic changes may thus have future potential in the diagnosis of a wide array of heterogeneous diseases that result from genetic disturbances.
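The evaluation scheme named above, a supervised classifier scored by leave-one-out cross-validation and summarized as an AUC, follows a standard pattern; the sketch below uses simulated expression values and a logistic regression as placeholders for the study's data and algorithm.

```python
# Hedged sketch of LOOCV classification with AUC on a small expression matrix.
# The 9-gene panel, group sizes, and effect size are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
n_cases, n_controls, n_genes = 16, 32, 9
X = np.vstack([rng.normal(0.8, 1.0, (n_cases, n_genes)),      # shifted "Ts21"
               rng.normal(0.0, 1.0, (n_controls, n_genes))])  # "euploid"
y = np.array([1] * n_cases + [0] * n_controls)

probs = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    probs[test] = clf.predict_proba(X[test])[:, 1]
print("LOOCV AUC:", round(roc_auc_score(y, probs), 2))
```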
USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard Schultz
2012-09-01
A recommended protocol to formulate numeric tool specifications and validation needs in concert with practices accepted by regulatory agencies for advanced reactors is described. The protocol is based on the plant type and perceived transient and accident envelopes that translate to boundary conditions for a process that gives: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) the specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) the specification of the validation matrices and experiments, including the desired validation data. The result of applying the process enables a complete program to be defined, including costs, for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.
Parametric study of different contributors to tumor thermal profile
NASA Astrophysics Data System (ADS)
Tepper, Michal; Gannot, Israel
2014-03-01
Treating cancer is one of the major challenges of modern medicine. There is great interest in assessing tumor development in in vivo animal and human models, as well as in in vitro experiments. Existing methods are either limited by cost and availability or by their low accuracy and reproducibility. Thermography holds the potential of being a noninvasive, low-cost, radiation-free and easy-to-use method for tumor monitoring. Tumors can be detected in thermal images due to their relatively higher or lower temperature compared to the temperature of the healthy skin surrounding them. Extensive research has been performed to show the validity of thermography as an efficient method for tumor detection and the possibility of extracting tumor properties from thermal images, showing promising results. However, deducing from one type of experiment to others is difficult due to the differences in tumor properties, especially between different types of tumors or different species. There is a need for research linking different types of tumor experiments. In this research, a parametric analysis of possible contributors to tumor thermal profiles was performed. The effect of tumor geometric, physical and thermal properties was studied, both independently and together, in phantom model experiments and computer simulations. Theoretical and experimental results were cross-correlated to validate the models used and increase the accuracy of simulated complex tumor models. The contribution of different parameters in various tumor scenarios was estimated and the implication of these differences on the observed thermal profiles was studied. The correlation between animal and human models is discussed.
Development of a questionnaire for assessing the childbirth experience (QACE).
Carquillat, Pierre; Vendittelli, Françoise; Perneger, Thomas; Guittier, Marie-Julia
2017-08-30
Due to its potential impact on women's psychological health, assessing women's perceptions of their childbirth experience is important. The aim of this study was to develop a multidimensional self-reporting questionnaire to evaluate the childbirth experience. Factors influencing the childbirth experience were identified from a literature review and the results of a previous qualitative study. A total of 25 items were combined from existing instruments or created de novo. A draft version was pilot tested for face validity with 30 women and submitted for evaluation of its construct validity to 477 primiparous women at one month post-partum. Recruitment took place in two obstetric clinics of Swiss and French university hospitals. To evaluate content validity, we compared item responses to general childbirth experience assessments on a numeric 0 to 10 rating scale, dichotomizing the assessment scores into two groups: "0 to 7" and "8 to 10". We performed an exploratory factor analysis to identify underlying dimensions. In total, 291 women completed the questionnaire (response rate = 61%). The responses to 22 items differed statistically significantly between the "0 to 7" and "8 to 10" groups for the general childbirth experience assessments. An exploratory factor analysis yielded four sub-scales, which were labelled "relationship with staff" (4 items), "emotional status" (3 items), "first moments with the newborn" (3 items) and "feelings at one month postpartum" (3 items). All 4 scales had satisfactory internal consistency levels (alpha coefficients from 0.70 to 0.85). The full 25-item version can be used to analyse each item by itself, and the short 4-dimension version can be scored to summarize the general assessment of the childbirth experience. The Questionnaire for Assessing the Childbirth Experience (QACE) could be useful as a screening instrument to identify women with negative childbirth experiences. It can be used both as a research instrument in its short version and as a questionnaire for use in clinical practice in its full version.
Sirota, Miroslav; Juanchich, Marie
2018-03-27
The Cognitive Reflection Test, measuring intuition inhibition and cognitive reflection, has become extremely popular because it reliably predicts reasoning performance, decision-making, and beliefs. Across studies, the response format of CRT items sometimes differs, based on the assumed construct equivalence of tests with open-ended versus multiple-choice items (the equivalence hypothesis). Evidence and theoretical reasons, however, suggest that the cognitive processes measured by these response formats and their associated performances might differ (the nonequivalence hypothesis). We tested the two hypotheses experimentally by assessing the performance in tests with different response formats and by comparing their predictive and construct validity. In a between-subjects experiment (n = 452), participants answered stem-equivalent CRT items in an open-ended, a two-option, or a four-option response format and then completed tasks on belief bias, denominator neglect, and paranormal beliefs (benchmark indicators of predictive validity), as well as on actively open-minded thinking and numeracy (benchmark indicators of construct validity). We found no significant differences between the three response formats in the numbers of correct responses, the numbers of intuitive responses (with the exception of the two-option version, which had a higher number than the other tests), and the correlational patterns of the indicators of predictive and construct validity. All three test versions were similarly reliable, but the multiple-choice formats were completed more quickly. We speculate that the specific nature of the CRT items helps build construct equivalence among the different response formats. We recommend using the validated multiple-choice version of the CRT presented here, particularly the four-option CRT, for practical and methodological reasons. Supplementary materials and data are available at https://osf.io/mzhyc/ .
Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.
Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L
2015-07-01
Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors, and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless system for objective assessment of surgical performance has proven valid for assessing arthroscopic skills. The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater, with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
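One plausible building block for such motion-derived metrics is counting hand movements by peak detection on the acceleration magnitude. The sketch below is a hypothetical illustration of that step, not the study's custom algorithm; the signal, threshold, and sampling rate are all invented.

```python
# Hedged sketch: count "hand movements" from a worn motion sensor by finding
# peaks in the acceleration magnitude. All parameters are hypothetical.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
t = np.linspace(0, 60, 60 * 50)                        # 60 s at 50 Hz
accel = np.abs(np.sin(2 * np.pi * 0.8 * t)) + rng.normal(0, 0.1, t.size)

# require peak height >= 0.8 and >= 0.5 s (25 samples) between movements
peaks, _ = find_peaks(accel, height=0.8, distance=25)
print("movement count:", len(peaks))
```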
Wongpakaran, Tinakon; Wongpakaran, Nahathai
2012-01-01
This study investigates the psychometric properties of the short version of the revised 'Experience of Close Relationships' questionnaire, comparing non-clinical and clinical samples. In total, 702 subjects participated in this study, of whom 531 were non-clinical participants and 171 were psychiatric patients. They completed the short version of the revised 'Experience of Close Relationships' questionnaire (ECR-R-18), the Perceived Stress Scale-10 (PSS-10), the Rosenberg Self-Esteem Scale (RSES) and the UCLA Loneliness Scale. A retest of the ECR-R-18 was performed after a four-week interval. Confirmatory factor analyses were then performed to test the validity of the new scale. The ECR-R-18 showed fair to good internal consistency (α = 0.77 to 0.87) for both samples, and the test-retest reliability was found to be satisfactory (ICC = 0.75). The anxiety sub-scale demonstrated concurrent validity with the PSS-10 and RSES, while the avoidance sub-scale showed concurrent validity with the UCLA Loneliness Scale. Confirmatory factor analysis using method factors yielded two factors with an acceptable model fit for both groups. An invariance test revealed that the ECR-R-18 performed differently in the clinical group than in the non-clinical group. The ECR-R-18 questionnaire revealed an overall better level of fit than the original 36-item questionnaire, indicating its suitability for use with a broader range of samples, including clinical ones. The reliability of the ECR-R-18 might be increased if a modified scoring system is used and if our suggestions with regard to future studies are followed up.
A model of clutter for complex, multivariate geospatial displays.
Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L
2009-02-01
A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
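The published C3 model combines color density and saliency; as a loose, hypothetical stand-in for the color-clustering idea (not the authors' formulation), one can cluster an image's pixels in color space and score clutter from the spatial spread of the clusters:

```python
# Hedged, simplified stand-in for a color-clustering clutter measure
# (illustrative only; not the published C3 model or its parameters).
import numpy as np
from sklearn.cluster import KMeans

def clutter_score(image: np.ndarray, n_clusters: int = 8) -> float:
    """image: (H, W, 3) RGB array in [0, 1]; higher score = more dispersed colors."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(pixels)
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.column_stack([yy.ravel(), xx.ravel()])
    # mean within-cluster spatial spread, normalised by image size
    spread = np.mean([coords[labels == c].std(axis=0).mean()
                      for c in range(n_clusters) if np.any(labels == c)])
    return spread / max(h, w)

img = np.random.default_rng(3).random((64, 64, 3))   # hypothetical display
print("clutter proxy:", round(clutter_score(img), 3))
```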
The Zero Boil-Off Tank Experiment Ground Testing and Verification of Fluid and Thermal Performance
NASA Technical Reports Server (NTRS)
Chato, David J.; Kassemi, Mohammad; Kahwaji, Michel; Kieckhafer, Alexander
2016-01-01
The Zero Boil-Off Technology (ZBOT) Experiment involves performing a small-scale International Space Station (ISS) experiment to study tank pressurization and pressure control in microgravity. The ZBOT experiment consists of a vacuum-jacketed test tank filled with an inert fluorocarbon simulant liquid. Heaters and thermo-electric coolers are used in conjunction with an axial jet mixer flow loop to study a range of thermal conditions within the tank. The objective is to provide a high quality database of low gravity fluid motions and thermal transients which will be used to validate Computational Fluid Dynamics (CFD) modeling. The CFD models can then be used in turn to predict behavior in larger systems with cryogens. This paper will discuss the work that has been done to demonstrate that the ZBOT experiment is capable of performing the functions required to produce meaningful and accurate results, prior to its launch to the International Space Station. Main systems discussed are expected to include the thermal control system, the optical imaging system, and the tank filling system. This work is sponsored by NASA's Human Exploration Mission Directorate's Physical Sciences Research program.
ERIC Educational Resources Information Center
Ali, Shainna; Lambie, Glenn; Bloom, Zachary D.
2017-01-01
The Sexual Orientation Counselor Competency Scale (SOCCS), developed by Bidell in 2005, measures counselors' levels of skills, awareness, and knowledge in assisting lesbian, gay, or bisexual (LGB) clients. In an effort to gain an increased understanding of the construct validity of the SOCCS, researchers performed an exploratory factor analysis on…
ERIC Educational Resources Information Center
Fox, Mark C.; Ericsson, K. Anders; Best, Ryan
2011-01-01
Since its establishment, psychology has struggled to find valid methods for studying thoughts and subjective experiences. Thirty years ago, Ericsson and Simon (1980) proposed that participants can give concurrent verbal expression to their thoughts (think aloud) while completing tasks without changing objectively measurable performance (accuracy).…
ERIC Educational Resources Information Center
Hatala, John-Paul
2009-01-01
Any organization that is able to promote the importance of increased levels of social capital and individuals who can leverage and use the resources that exist within the network may experience higher levels of performance. This study sought to add to our knowledge about individuals' accessing social resources for the purpose of accomplishing…
Systems Concepts and Computer-Managed Instruction: An Implementation and Validation Study.
ERIC Educational Resources Information Center
Dick, Walter; Gallagher, Paul
The Florida State model of computer-managed instruction (CMI) differs from other such models in that it assumes a student will achieve his maximum performance level by interacting directly with the computer in order to evaluate his learning experience. In this system the computer plays the role of real-time diagnostician and prescriber for the…
A Study of a Super-Cooling Technique for Removal of Rubber from Solid-Rubber Tires.
environmental pollution. In answering these questions, an experiment is conducted to validate the concept and to determine liquid... is performed to compare the costs of the super-cooling technique with those of the brake drum lathe method of rubber removal. Safety and environmental pollution factors are also investigated and
Validation of minicams for measuring concentrations of chemical agent in environmental air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menton, R.G.; Hayes, T.L.; Chou, Y.L.
1993-05-13
Environmental monitoring for chemical agents is necessary to ensure that notification and appropriate action will be taken in the event that there is a release of such agents, exceeding control limits, into the workplace outside of engineering controls. Prior to implementing new analytical procedures for environmental monitoring, precision and accuracy (PA) tests are conducted to ensure that an agent monitoring system performs according to specified accuracy, precision, and sensitivity requirements. This testing not only establishes the accuracy and precision of the method, but also determines what factors can affect the method's performance. Performance measures that are particularly important in agent monitoring include the Detection Limit (DL), Decision Limit (DC), Found Action Level (FAL), and the Target Action Level (TAL). PA experiments were performed at Battelle's Medical Research and Evaluation Facility (MREF) to validate the use of the miniature chemical agent monitoring system (MINICAMS) for measuring environmental air concentrations of sulfur mustard (HD). This presentation discusses the experimental and statistical approaches for characterizing the performance of MINICAMS for measuring HD in air.
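Decision and detection limits of the kind named above are conventionally derived from blank-measurement statistics in the style of Currie; the sketch below illustrates that convention with invented numbers and should not be read as the actual MINICAMS protocol.

```python
# Hedged sketch: Currie-style decision limit (L_C) and detection limit (L_D)
# from blank replicates. Blank responses here are invented placeholders.
import numpy as np

blanks = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2])  # hypothetical blanks
mu, sd = blanks.mean(), blanks.std(ddof=1)

decision_limit = mu + 1.645 * sd    # L_C: ~5% false-positive risk
detection_limit = mu + 3.29 * sd    # L_D: adds ~5% false-negative risk
print(f"DC = {decision_limit:.2f}, DL = {detection_limit:.2f}")
```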
Development and quality analysis of the Work Experience Measurement Scale (WEMS).
Nilsson, Petra; Bringsén, Asa; Andersson, H Ingemar; Ejlertsson, Göran
2010-01-01
Instruments related to work are commonly illuminated from an ill-health perspective. The need for a concise and usable instrument in workplace health promotion governed the aim of this paper, which is to present the development process and quality assessment of the Work Experience Measurement Scale (WEMS). A survey, using a questionnaire based on established theories regarding work and health, and a focus group study were performed in hospital settings in 2005 and 2006, respectively. A Principal Component Analysis (PCA) was used to statistically develop a model, and focus group interviews were conducted to compare quantitative and qualitative results for convergence and corroboration. The PCA resulted in a six-factor model of dimensions containing items regarding management, reorganization, internal work experience, pressure of time, autonomy and supportive working conditions. In the analysis of the focus group study three themes appeared, and their underlying content matched the dimensions of the PCA. The reliability, shown by weighted kappa values, ranged from 0.36 to 0.71, and Cronbach's alpha values of the dimensions were all adequate, above 0.7. Discriminant validity, with correlation values ranging from 0.10 to 0.39, and content validity appeared to be good when the theoretical content of the WEMS was compared to that of similar instruments. The WEMS presents a multidimensional picture of work experience. Its theoretical base and psychometric properties support its applicability and offer the possibility to measure trends in work experience over time in health care settings. One intention of the WEMS is to stimulate the ability of organizations and the employees themselves to take action on improving their work experience. The conciseness of the instrument is intended to increase its usability.
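The internal-consistency statistic reported above is straightforward to compute. As a minimal sketch (Python with NumPy, hypothetical data; not the authors' analysis code), Cronbach's alpha for one WEMS dimension can be calculated from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# hypothetical Likert responses (5 respondents x 4 items of one dimension)
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 2, 3],
                   [4, 4, 4, 5]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```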
The EGS Collab Project: Stimulation Investigations for Geothermal Modeling Analysis and Validation
NASA Astrophysics Data System (ADS)
Blankenship, D.; Kneafsey, T. J.
2017-12-01
The US DOE's EGS Collab project team is establishing a suite of intermediate-scale (~10-20 m) field test beds for coupled stimulation and interwell flow tests. The team, drawn from multiple national laboratories and universities, is designing the tests to compare measured data to models to improve the measurement and modeling toolsets available for use in field sites and investigations such as DOE's Frontier Observatory for Research in Geothermal Energy (FORGE) Project. Our tests will be well-controlled, in situ experiments focused on rock fracture behavior, seismicity, and permeability enhancement. Pre- and post-test modeling will allow for model prediction and validation. High-quality, high-resolution geophysical and other fracture characterization data will be collected, analyzed, and compared with models and field observations to further elucidate the basic relationships between stress, induced seismicity, and permeability enhancement. Coring through the stimulated zone after tests will provide fracture characteristics that can be compared to monitoring data and model predictions. We will also observe and quantify other key governing parameters that impact permeability, and attempt to understand how these parameters might change throughout the development and operation of an Enhanced Geothermal System (EGS) project, with the goal of enabling commercial viability of EGS. The Collab team will perform three major experiments over the three-year project duration. Experiment 1, intended to investigate hydraulic fracturing, will be performed in the Sanford Underground Research Facility (SURF) at a depth of 4,850 feet and will build on kISMET Project findings. Experiment 2 will be designed to investigate hydroshearing. Experiment 3 will investigate changes in fracturing strategies and will be further specified as the project proceeds. The tests will provide quantitative insights into the nature of stimulation (e.g., hydraulic fracturing, hydroshearing, mixed-mode fracturing, thermal fracturing) in crystalline rock under reservoir-like stress conditions and generate high-quality, high-resolution, diverse data sets to be simulated, allowing model validation. Monitoring techniques will also be evaluated under controlled conditions, identifying technologies appropriate for deeper full-scale EGS sites.
Defect Induced Mix Experiments (DIME) for NIF
NASA Astrophysics Data System (ADS)
Schmitt, Mark; Bradley, Paul; Cobble, James; Hsu, Scott; Krasheninnikova, Natalia; Magelssen, Glenn; Murphy, Thomas; Obrey, Kimberly; Tregillis, Ian; Wysocki, Frederick
2011-10-01
Los Alamos National Laboratory will be performing FY12 NIF experiments using polar direct drive to measure the effects of high mode number defects on ICF implosion hydrodynamics and yield. The effect of equatorial groove features will be assessed using both x-ray backlighting and spectrally resolved imaging of higher-Z dopant layers in 2.2 mm diameter (30 microns thick) CH capsules using a multiple monochromatic imager (MMI). By placing thin, 2 micron thick, layers containing ~1.5% of either Ge or Se at different depths in the capsule, we will be able to characterize the mixing and heating of these layers in both perturbed and unperturbed regions of the capsule. Precursor experiments have been performed on Omega to validate these measurement methods using Ti and V layers. An overview of our current results from Omega and design efforts for NIF will be presented. Work performed by Los Alamos National Laboratory under contract DE-AC52-06NA25396 for the National Nuclear Security Administration of the U.S. Department of Energy.
NASA Astrophysics Data System (ADS)
Farinelli, R.; BESIII CGEM Group
2017-01-01
A new cylindrical GEM detector is under development to upgrade the tracking system of the BESIII experiment at IHEP in Beijing. The new detector will replace the current inner drift chamber of the experiment in order to significantly increase the spatial resolution along the beam direction (σ_z ≈ 300 μm) while preserving the momentum resolution (σ_{p_t}/p_t ≈ 0.5% at 1 GeV) and the transverse spatial resolution (σ_{xy} ≈ 130 μm). A cylindrical prototype with the final detector dimensions has been built and the assembly procedure has been successfully validated. Moreover, the performance of a 10 × 10 cm² planar GEM has been studied inside a magnetic field by means of a beam test at CERN. The data have been analyzed using two different readout modes: the charge centroid (CC) and the micro time projection chamber (μTPC) method.
Matsuda, Tadashi; McDougall, Elspeth M; Ono, Yoshinari; Hattori, Ryohei; Baba, Shiro; Iwamura, Masatsugu; Terachi, Toshiro; Naito, Seiji; Clayman, Ralph V
2012-11-01
We studied the construct validity of the LapMentor, a virtual reality laparoscopic surgical simulator, and the correlation between the data collected on the LapMentor and the results of video assessment of real laparoscopic surgeries. Ninety-two urologists were tested on basic skill tasks No. 3 (SK3) to No. 8 (SK8) on the LapMentor. They were divided into three groups: group A (n=25) had no experience with laparoscopic surgeries as a chief surgeon; group B (n=33) had performed fewer than 35 cases; and group C (n=34) had performed 35 or more. Group scores on the accuracy, efficacy, and time of the tasks were compared. Forty physicians with 20 or more cases supplied unedited videotapes showing a laparoscopic nephrectomy or an adrenalectomy in its entirety, and the videos were assessed in a blinded fashion by expert referees. Correlations between the videotape score (VS) and performance on the LapMentor were analyzed. Group C showed significantly better outcomes than group A in the accuracy (SK5) (P=0.013), efficacy (SK8) (P=0.014), and speed (SKs 3 and 8) (P=0.009 and P=0.002, respectively) of the LapMentor performances. Group B showed significantly better outcomes than group A in the speed and efficacy of the performances in SK8 (P=0.011 and P=0.029, respectively). Analysis of the LapMentor motion data demonstrated that smooth, ideal instrument movement is more important than speed of movement for achieving accurate performance in each task. Multiple linear regression analysis indicated that the average accuracy score in SK4, 5, and 8 had a significant positive correlation with VS (P=0.01). This study demonstrated the construct and predictive validity of the LapMentor basic skill tasks, supporting their possible usefulness for the preclinical evaluation of laparoscopic skills.
Blagus, Rok; Lusa, Lara
2015-11-04
Prediction models are used in clinical research to develop rules that can accurately predict the outcome of patients based on some of their characteristics. They represent a valuable tool in the decision-making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced, as only a few patients experience the event (and therefore belong to the minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. We identify some results from the biomedical literature where cross-validation was performed incorrectly, and where we expect that the performance of oversampling techniques was heavily overestimated.
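The pitfall described here is easy to reproduce. The sketch below (Python with NumPy and scikit-learn; an illustration under stated assumptions, not the authors' code) uses pure-noise features, so any cross-validated AUC above 0.5 is optimism: oversampling before splitting leaks duplicated minority samples into the test folds, while oversampling inside each training fold gives an honest estimate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)

def oversample(X, y):
    """Random oversampling of the minority class (with replacement)."""
    minority, majority = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

# Pure-noise data: the true AUC is 0.5 by construction.
X = rng.normal(size=(200, 50))
y = (rng.random(200) < 0.15).astype(int)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# WRONG: oversample first, then cross-validate. Copies of the same
# minority samples land in both the training and the test folds.
Xo, yo = oversample(X, y)
wrong = []
for tr, te in cv.split(Xo, yo):
    model = LogisticRegression(max_iter=2000).fit(Xo[tr], yo[tr])
    wrong.append(roc_auc_score(yo[te], model.predict_proba(Xo[te])[:, 1]))

# RIGHT: split first, then oversample the training fold only.
right = []
for tr, te in cv.split(X, y):
    Xtr, ytr = oversample(X[tr], y[tr])
    model = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
    right.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))

print(f"oversample-then-CV AUC: {np.mean(wrong):.2f}  (optimistic)")
print(f"CV-then-oversample AUC: {np.mean(right):.2f}  (close to 0.5)")
```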
[Reliability and validity of Driving Anger Scale in professional drivers in China].
Li, Z; Yang, Y M; Zhang, C; Li, Y; Hu, J; Gao, L W; Zhou, Y X; Zhang, X J
2017-11-10
Objective: To assess the reliability and validity of the Chinese version of the Driving Anger Scale (DAS) in professional drivers in China and provide a scientific basis for the application of the scale in drivers in China. Methods: Professional drivers, including taxi drivers, bus drivers, truck drivers and school bus drivers, were selected to complete the questionnaire. Cronbach's α and split-half reliability were calculated to evaluate the reliability of the DAS, and content, construct, discriminant and convergent validity were assessed to measure the validity of the scale. Results: The overall Cronbach's α of the DAS was 0.934 and the split-half reliability was 0.874. The correlation coefficient of each subscale with the total scale was 0.639-0.922. The simplified version of the DAS supported a presupposed six-factor structure, explaining 56.371% of the total variance revealed by exploratory factor analysis. The DAS had good convergent and discriminant validity, with a success rate of the calibration experiment of 100%. Conclusion: The DAS has good reliability and validity in professional drivers in China, and its use is worth promoting in drivers.
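As an illustration of the split-half statistic reported above, the sketch below (Python with NumPy; hypothetical data, not the study's analysis) correlates odd- and even-item half scores and applies the Spearman-Brown correction for full test length:

```python
import numpy as np

def split_half_reliability(items):
    """Odd-even split-half reliability with Spearman-Brown correction.

    items: (n_respondents, k_items) matrix of item scores.
    """
    odd_half = items[:, 0::2].sum(axis=1)
    even_half = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd_half, even_half)[0, 1]   # half-test correlation
    return 2 * r / (1 + r)                       # corrected to full length

# hypothetical responses (6 drivers x 8 items)
rng = np.random.default_rng(42)
trait = rng.normal(size=(6, 1))                  # each driver's anger level
scores = np.clip(np.round(3 + trait + 0.7 * rng.normal(size=(6, 8))), 1, 5)
print(f"split-half reliability = {split_half_reliability(scores):.2f}")
```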
Assessing students' communication skills: validation of a global rating.
Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose
2008-12-01
Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences, its importance should be emphasised by performance-based assessment. As detailed checklists have been shown, for several reasons, to be poorly suited to the assessment of communication skills, this study aimed to validate a global rating scale. A Canadian instrument was translated into German and adapted to assess students' communication skills during an end-of-semester OSCE. Subjects were second- and third-year medical students at the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as standardised patients using this instrument.
Development of Testing Methodologies to Evaluate Postflight Locomotor Performance
NASA Technical Reports Server (NTRS)
Mulavara, A. P.; Peters, B. T.; Cohen, H. S.; Richards, J. T.; Miller, C. A.; Brady, R.; Warren, L. E.; Bloomberg, J. J.
2006-01-01
Crewmembers experience locomotor and postural instabilities during ambulation on Earth following their return from space flight. Gait training programs designed to facilitate recovery of locomotor function following a transition to a gravitational environment need to be accompanied by relevant assessment methodologies to evaluate their efficacy. The goal of this paper is to demonstrate the operational validity of two tests of locomotor function that were used to evaluate performance after long duration space flight missions on the International Space Station (ISS).
Small-Angle X-ray Scattering (SAXS) Instrument Performance and Validation Using Silver Nanoparticles
2016-12-01
Music psychopathology. II. Assessment of musical expression.
Steinberg, R; Raith, L
1985-01-01
A short polarity profile, well suited for the assessment of the musical expression of performances recorded from mentally ill patients and controls, is described. Nine out of 12 polarities showed sufficient differentiating qualities, ranging from professional to poor amateur performances. Only 3 polarities had to be reformulated. The assessments of the 3 experts had high interrater reliability and retest stability. The highly significant correlation between the results of the experts and those of 50 independent subjects indicates the validity of the experiment.
Modeling of impulsive propellant reorientation
NASA Technical Reports Server (NTRS)
Hochstein, John I.; Patag, Alfredo E.; Chato, David J.
1988-01-01
The impulsive propellant reorientation process is modeled using the Energy Calculations for Liquid Propellants in a Space Environment (ECLIPSE) code. A brief description of the process and the computational model is presented. Code validation is documented via comparison to experimentally derived data for small-scale tanks. Predictions of reorientation performance are presented for two tanks designed for use in flight experiments and for a proposed full-scale OTV tank. A new dimensionless parameter is developed to correlate reorientation performance in geometrically similar tanks. Its success is demonstrated.
Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.
2018-01-01
Polyphonic music listening exemplifies well the processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects reported post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within-attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre-distance scores, showed an influence of general task difficulty on the timbre-distance effect. Comparison of laboratory and fMRI data showed scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861
Simultaneous acquisition for T2 -T2 Exchange and T1 -T2 correlation NMR experiments
NASA Astrophysics Data System (ADS)
Montrazi, Elton T.; Lucas-Oliveira, Everton; Araujo-Ferreira, Arthur G.; Barsi-Andreeta, Mariane; Bonagamba, Tito J.
2018-04-01
NMR measurements of longitudinal and transverse relaxation times and their multidimensional correlations provide useful information about molecular dynamics. However, these experiments are very time-consuming, and many researchers have proposed faster experiments to mitigate this issue. This paper presents a new way to simultaneously perform T2-T2 exchange and T1-T2 correlation experiments by taking advantage of the storage time and the two-step phase cycling used for running the relaxation exchange experiment. The data corresponding to each step are either summed or subtracted to produce the T2-T2 and T1-T2 data, enhancing the information obtained while maintaining the experiment duration. Comparing the results of this technique with those of traditional NMR experiments, it was possible to validate the method.
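The sum/difference combination is a one-line operation once the two phase-cycle acquisitions are in hand. Below is a minimal NumPy sketch with synthetic signals standing in for the two phase-cycle steps; the sign convention (difference yields the T2-T2 exchange data, sum yields the T1-T2 data) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_indirect, n_direct = 64, 256   # sizes of the indirect/direct dimensions

# Synthetic stand-ins for the two signal components: one stored along z
# during the storage interval (carrying exchange information) and one that
# relaxed with T1; the stored component flips sign between the two steps.
stored = np.exp(-np.add.outer(np.arange(n_indirect), np.arange(n_direct)) / 80.0)
relaxed = np.tile(np.exp(-np.arange(n_direct) / 40.0), (n_indirect, 1))
noise = lambda: 0.01 * rng.standard_normal((n_indirect, n_direct))

step_a = stored + relaxed + noise()    # phase-cycle step 1
step_b = -stored + relaxed + noise()   # phase-cycle step 2

t2t2_data = 0.5 * (step_a - step_b)    # difference isolates the stored part
t1t2_data = 0.5 * (step_a + step_b)    # sum isolates the T1-relaxed part
print(t2t2_data.shape, t1t2_data.shape)
```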
Stenhouse, Rosie; Snowden, Austyn; Young, Jenny; Carver, Fiona; Carver, Hannah; Brown, Norrie
2016-08-01
Reports of poor nursing care have focused attention on values-based selection of candidates onto nursing programmes. Values-based selection lacks clarity and valid measures. Previous caring experience might lead to better care. Emotional intelligence (EI) might be associated with performance, and is conceptualised and measurable. To examine the impact of 1) previous caring experience, 2) emotional intelligence, and 3) social connection scores on performance and retention in a cohort of first year nursing and midwifery students in Scotland. A longitudinal, quasi-experimental design. Adult and mental health nursing, and midwifery programmes in a Scottish University. Adult, mental health and midwifery students (n=598) completed the Trait Emotional Intelligence Questionnaire-short form and Schutte's Emotional Intelligence Scale on entry to their programmes at a Scottish University, alongside demographic and previous caring experience data. Social connection was calculated from a subset of questions identified within the TEIQue-SF in a prior factor and Rasch analysis. Student performance was calculated as the mean mark across the year. Withdrawal data were gathered. 598 students completed baseline measures. 315 students declared previous caring experience, 277 did not. An independent-samples t-test identified that those without previous caring experience scored higher on performance (57.33±11.38) than those with previous caring experience (54.87±11.19), a statistically significant difference of 2.47 (95% CI, 0.54 to 4.38), t(533)=2.52, p=.012. Emotional intelligence scores were not associated with performance. Social connection scores for those withdrawing (mean rank=249) and those remaining (mean rank=304.75) were statistically significantly different, U=15,300, z=-2.61, p<0.009. Previous caring experience led to worse performance in this cohort. Emotional intelligence was not a useful indicator of performance. Lower scores on the social connection factor were associated with withdrawal from the course. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fluvial experiments using inertial sensors.
NASA Astrophysics Data System (ADS)
Maniatis, Georgios; Valyrakis, Manousos; Hodge, Rebecca; Drysdale, Tim; Hoey, Trevor
2017-04-01
During the last four years we have announced results on the development of a smart pebble that is constructed and calibrated specifically for capturing the dynamics of coarse sediment motion in river beds at a grain scale. In this presentation we report details of our experimental validation across a range of flow regimes. The smart pebble contains Inertial Measurement Units (IMUs), sensors capable of recording the inertial acceleration and the angular velocity of the rigid bodies to which they are attached. IMUs are available across a range of performance levels, with commensurate increases in size, cost and performance as one progresses from integrated-circuit devices for commercial applications such as gaming and mobile phones, to larger brick-sized systems sometimes found in industrial applications such as vibration monitoring and quality control, or even the rack-mount equipment used in some aerospace and navigation applications (which can go as far as to include lasers and optical components). In parallel with developments in commercial and industrial settings, geomorphologists have recently started to explore means of deploying IMUs in smart pebbles. The less expensive, chip-scale IMUs have been shown to have adequate performance for this application, as well as offering a sufficiently compact form factor. Four prototype sensors have been developed so far, and the latest (400 g acceleration range, 50-200 Hz sampling frequency) has been tested in fluvial laboratory experiments. We present results from three different experimental regimes designed for the evaluation of this sensor: a) an entrainment threshold experiment; b) a bed impact experiment; and c) a rolling experiment. All experiments used a 100 mm spherical sensor, and set a) was repeated using an equivalent-size elliptical sensor. The experiments were conducted in the fluvial laboratory of the University of Glasgow (0.9 m wide flume) under different hydraulic conditions. The use of IMUs results in direct parametrization of the inertial forces on grains, which for the tested grain sizes were, as expected, always comparable to the independently measured hydrodynamic forces. However, the validity of IMU measurements is subject to specific design, processing and experimental considerations, and we present the results of our analysis of these.
Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki
2017-09-01
Many benchmark experiments have been carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed, vaguely, to validate the nuclear data below 14 MeV; however, no precise studies exist. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, the energy range was expanded to the entire region in order to generalize the above discussion. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.
Freyberger, Alexius; Wilson, Vickie; Weimer, Marc; Tan, Shirlee; Tran, Hoai-Son; Ahr, Hans-Jürgen
2010-08-01
Despite about two decades of research in the field of endocrine active compounds, still no validated human recombinant (hr) estrogen receptor-alpha (ERalpha) binding assay is available, although hr-ERalpha is available from several sources. In a joint effort, US EPA and Bayer Schering Pharma, with funding from the EU-sponsored 6th framework project, ReProTect, developed a model protocol for such a binding assay. Important features of this assay are the use of a full-length hr-ERalpha and performance in a 96-well plate format. A full-length hr-ERalpha was chosen, as it was considered to provide the most accurate and human-relevant results, whereas truncated receptors could perform differently. Besides three reference compounds [17beta-estradiol, norethynodrel, dibutylphthalate], nine test compounds with different affinities for the ERalpha [diethylstilbestrol (DES), ethynylestradiol, meso-hexestrol, equol, genistein, o,p'-DDT, nonylphenol, n-butylparaben, and corticosterone] were used to explore the performance of the assay. Three independent experiments per compound were performed on different days, and dilutions of test compounds from deep-frozen stocks, solutions of radiolabeled ligand, and the receptor preparation were freshly prepared for each experiment. The ERalpha binding properties of reference and test compounds were well detected. As expected, dibutylphthalate and corticosterone were non-binders in this assay. In terms of the relative ranking of binding affinities, there was good agreement with published data obtained from experiments using a human recombinant ERalpha ligand binding domain. Irrespective of the chemical nature of the compound, individual IC(50)-values for a given compound varied by not more than a factor of 2.5. Our data demonstrate that the assay was robust and reliably ranked compounds with strong, weak, and no affinity for the ERalpha with high accuracy. It avoids the manipulation and use of animals, i.e., the preparation of uterine cytosol as receptor source from ovariectomized rats, as a recombinant protein is used, and thus contributes to the 3R concept (reduce, replace, and refine). Furthermore, in contrast to other assays, this assay could be adjusted to an intermediate/high throughput format. On the whole, this assay is a promising candidate for further validation. Copyright 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; Jim Gulliford
2014-10-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists, worldwide, to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for future decades.
Spacelab Life Sciences 1 - The stepping stone
NASA Technical Reports Server (NTRS)
Dalton, B. P.; Leon, H.; Hogan, R.; Clarke, B.; Tollinger, D.
1988-01-01
The Spacelab Life Sciences (SLS-1) mission scheduled for launch in March 1990 will study the effects of microgravity on physiological parameters of humans and animals. The data obtained will guide equipment design, performance of activities involving the use of animals, and prediction of human physiological responses during long-term microgravity exposure. The experiments planned for the SLS-1 mission include a particulate-containment demonstration test, integrated rodent experiments, jellyfish experiments, and validation of the small-mass measuring instrument. The design and operation of the Research Animal Holding Facility, General-Purpose Work Station, General-Purpose Transfer Unit, and Animal Enclosure Module are discussed and illustrated with drawings and diagrams.
NASA Astrophysics Data System (ADS)
Chen, Long; Zhang, Yidu; Wu, Qiong; Jie, Zhang
2018-02-01
A graphene-coating anti-/de-icing experiment was proposed, employing water-borne and oily graphene coatings on a composite-material anti-/de-icing component. Considering the sensitivity of helicopter rotors to icing, a new graphene coating enhancing the thermal conductivity of the anti-/de-icing component was proposed. The anti-/de-icing experiment was conducted to validate the effectiveness of the graphene coatings. The results of the experiment show that the graphene coatings play a prominent role in controlling the heat transfer of the anti-/de-icing component, and that the anti-/de-icing effect of the oily graphene coating is superior to that of the water-borne graphene coating.
EVALUATING ROBOT TECHNOLOGIES AS TOOLS TO EXPLORE RADIOLOGICAL AND OTHER HAZARDOUS ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David I. Gertman; David J. Bruemmer
2008-03-01
There is a general consensus that robots could be beneficial in performing tasks within hazardous radiological environments. Most control of robots in hazardous environments involves master-slave or teleoperation relationships between the human and the robot. While teleoperation-based solutions keep humans out of harm's way, they also change the training requirements to accomplish a task. In this paper we present a research methodology that allowed scientists at Idaho National Laboratory to identify, develop, and prove a semi-autonomous robot solution for search and characterization tasks within a hazardous environment. Two experiments are summarized that validated the use of semi-autonomy and show that robot autonomy can help mitigate some of the performance differences between operators who have different levels of robot experience, and can improve performance over teleoperated systems.
Can virtual reality simulation be used for advanced bariatric surgical training?
Lewis, Trystan M; Aggarwal, Rajesh; Kwasnicki, Richard M; Rajaretnam, Niro; Moorthy, Krishna; Ahmed, Ahmed; Darzi, Ara
2012-06-01
Laparoscopic bariatric surgery is a safe and effective way of treating morbid obesity. However, the operations are technically challenging and training opportunities for junior surgeons are limited. This study aims to assess whether virtual reality (VR) simulation is an effective adjunct for training and assessment of laparoscopic bariatric technical skills. Twenty bariatric surgeons of varying experience (five experienced, five intermediate, and ten novice) were recruited to perform a jejuno-jejunostomy on both cadaveric tissue and on the bariatric module of the Lapmentor VR simulator (Simbionix Corporation, Cleveland, OH). Surgical performance was assessed using validated global rating scales (GRS) and procedure-specific video rating scales (PSRS). Subjects were also questioned about the appropriateness of VR as a training tool for surgeons. Construct validity of the VR bariatric module was demonstrated with a significant difference in performance between novice and experienced surgeons on the VR jejuno-jejunostomy module GRS (median 11-15.5; P = .017) and PSRS (median 11-13; P = .003). Content validity was demonstrated with surgeons describing the VR bariatric module as useful and appropriate for training (mean Likert score 4.45/7), and they would highly recommend VR simulation to others for bariatric training (mean Likert score 5/7). Face and concurrent validity were not established. This study shows that the bariatric module on a VR simulator demonstrates construct and content validity. VR simulation appears to be an effective method for training of advanced bariatric technical skills for surgeons at the start of their bariatric training. However, assessment of technical skills should still take place on cadaveric tissue. Copyright © 2012. Published by Mosby, Inc.
Development and Validation of an Internet Use Attitude Scale
ERIC Educational Resources Information Center
Zhang, Yixin
2007-01-01
This paper describes the development and validation of a new 40-item Internet Attitude Scale (IAS), a one-dimensional inventory for measuring Internet attitudes. The first experiment initiated a generic Internet attitude questionnaire, ensured construct validity, and examined factorial validity and reliability. The second experiment further…
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so that it becomes easier to judge whether any of the underlying assumptions are violated.
Getting the most out of RNA-seq data analysis.
Khang, Tsung Fei; Lau, Ching Yee
2015-01-01
Background. A common research goal in transcriptome projects is to find genes that are differentially expressed in different phenotype classes. Biologists might wish to validate such gene candidates experimentally, or use them for downstream systems biology analysis. Producing a coherent differential gene expression analysis from RNA-seq count data requires an understanding of how numerous sources of variation, such as the replicate size, the hypothesized biological effect size, and the specific method for making differential expression calls, interact. We believe an explicit demonstration of such interactions in real RNA-seq data sets is of practical interest to biologists. Results. Using two large public RNA-seq data sets, one representing a strong and the other a mild biological effect size, we simulated different replicate size scenarios and tested the performance of several commonly used methods for calling differentially expressed genes in each of them. We found that, when the biological effect size was mild, RNA-seq experiments should focus on experimental validation of differentially expressed gene candidates. Importantly, at least triplicates must be used, and the differentially expressed genes should be called using methods with high positive predictive value (PPV), such as NOISeq or GFOLD. In contrast, when the biological effect size was strong, differentially expressed genes mined from unreplicated experiments using NOISeq, ASC and GFOLD had between 30 and 50% mean PPV, an increase of more than 30-fold compared to the cases of mild biological effect size. Among methods with good PPV performance, having triplicates or more substantially improved mean PPV to over 90% for GFOLD, 60% for DESeq2, 50% for NOISeq, and 30% for edgeR. At a replicate size of six, we found DESeq2 and edgeR to be reasonable methods for calling differentially expressed genes at systems-level analysis, as their PPV and sensitivity trade-off was superior to the other methods'. Conclusion. When the biological effect size is weak, systems-level investigation is not possible using RNA-seq data, and no meaningful result can be obtained in unreplicated experiments. Nonetheless, NOISeq or GFOLD may yield limited numbers of gene candidates with good validation potential when triplicates or more are available. When the biological effect size is strong, NOISeq and GFOLD are effective tools for detecting differentially expressed genes in unreplicated RNA-seq experiments for qPCR validation. When triplicates or more are available, GFOLD is a sharp tool for identifying high-confidence differentially expressed genes for targeted qPCR validation; for downstream systems-level analysis, combined results from DESeq2 and edgeR are useful.
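PPV and sensitivity, the two quantities traded off throughout this comparison, are simple to compute once the calls are benchmarked against a ground-truth gene set. A minimal Python/NumPy sketch with hypothetical numbers (not the study's pipeline):

```python
import numpy as np

def ppv_and_sensitivity(called_de, truly_de):
    """Positive predictive value and sensitivity of a set of DE calls.

    called_de, truly_de: boolean arrays over the same genes.
    """
    tp = np.sum(called_de & truly_de)    # correctly called DE genes
    fp = np.sum(called_de & ~truly_de)   # false discoveries
    fn = np.sum(~called_de & truly_de)   # missed DE genes
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    sens = tp / (tp + fn) if tp + fn else float("nan")
    return ppv, sens

# toy example: 10,000 genes, 500 truly DE; a caller that misses 100 of
# them and adds 50 false positives
rng = np.random.default_rng(0)
truth = np.zeros(10_000, dtype=bool)
truth[:500] = True
calls = truth.copy()
calls[rng.choice(500, 100, replace=False)] = False
calls[rng.choice(np.arange(500, 10_000), 50, replace=False)] = True
print(ppv_and_sensitivity(calls, truth))   # PPV = 400/450 ≈ 0.89, sens = 0.80
```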
Current status of validation for robotic surgery simulators - a systematic review.
Abboudi, Hamid; Khan, Mohammed S; Aboumarzouk, Omar; Guru, Khurshid A; Challacombe, Ben; Dasgupta, Prokar; Ahmed, Kamran
2013-02-01
To analyse studies validating the effectiveness of robotic surgery simulators. The MEDLINE®, EMBASE® and PsycINFO® databases were systematically searched until September 2011. References from retrieved articles were reviewed to broaden the search. The simulator name, training tasks, participant level, training duration and evaluation scoring were extracted from each study. We also extracted data on feasibility, validity, cost-effectiveness, reliability and educational impact. We identified 19 studies investigating simulation options in robotic surgery. There are five different robotic surgery simulation platforms available on the market. In all, 11 studies sought opinion and compared performance between two different groups: 'expert' and 'novice'. Experts ranged in experience from 21 to 2,200 robotic cases. The novice groups consisted of participants with no prior experience on a robotic platform and were often medical students or junior doctors. The Mimic dV-Trainer®, ProMIS®, SimSurgery Educational Platform® (SEP) and Intuitive systems have shown face, content and construct validity. The Robotic Surgical Simulator™ system has only been face and content validated. All of the simulators except SEP have shown educational impact. Feasibility and cost-effectiveness of simulation systems were not evaluated in any trial. Virtual reality simulators were shown to be effective training tools for junior trainees. Simulation training holds the greatest potential to be used as an adjunct to traditional training methods to equip the next generation of robotic surgeons with the skills required to operate safely. However, current simulation models have only been validated in small studies. There is no evidence to suggest one type of simulator provides more effective training than any other. More research is needed to validate simulated environments further and investigate the effectiveness of animal and cadaveric training in robotic surgery. © 2012 BJU International.
Demekhin, E A; Kalaidin, E N; Kalliadasis, S; Vlaskin, S Yu
2010-09-01
We validate experimentally the Kapitsa-Shkadov model utilized in the theoretical studies by Demekhin [Phys. Fluids 19, 114103 (2007), doi:10.1063/1.2793148; Phys. Fluids 19, 114104 (2007), doi:10.1063/1.2793149] of surface turbulence on a thin liquid film flowing down a vertical planar wall. For water at 15 °C, surface turbulence typically occurs at an inlet Reynolds number of ≃40. Of particular interest is to assess experimentally the predictions of the model for three-dimensional nonlinear localized coherent structures, which represent elementary processes of surface turbulence. For this purpose we devise simple experiments to investigate the instabilities and transitions leading to such structures. Our experimental results are in good agreement with the theoretical predictions of the model. We also perform time-dependent computations for the formation of coherent structures and their interaction with localized structures of smaller amplitude on the surface of the film.
Upgrades for the CMS simulation
Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...
2015-05-22
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations. These include both completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These gains have been achieved through both technical improvements and numerical techniques. Substantial speed improvements have been achieved without changing the physics validation benchmarks that the experiment uses to validate our simulation application for use in production. We discuss the methods that we implemented and the corresponding demonstrated performance improvements deployed for our 2015 simulation application.
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of it over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing estimates still carry many uncertainties, arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) with quantified certainty is necessary but difficult. It is therefore indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to multi-scale evapotranspiration measurements from EC and LAS, respectively, with the footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS instruments were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, were used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.
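At the point scale, the core of such a validation is a handful of agreement statistics between RS_ET and footprint-matched tower measurements. A generic Python/NumPy sketch with hypothetical monthly values (not the authors' code):

```python
import numpy as np

def validate_et(rs_et, obs_et):
    """Agreement metrics between remotely sensed and measured ET values
    (e.g., footprint-matched EC/LAS observations)."""
    diff = rs_et - obs_et
    bias = diff.mean()                      # mean over/under-estimation
    rmse = np.sqrt((diff ** 2).mean())      # overall error magnitude
    r = np.corrcoef(rs_et, obs_et)[0, 1]    # linear agreement
    return {"bias": bias, "rmse": rmse, "r2": r ** 2}

# hypothetical monthly ET (mm/month) at one station
obs = np.array([12.0, 25.0, 48.0, 80.0, 110.0, 95.0, 60.0, 30.0])
rs = np.array([15.0, 22.0, 52.0, 75.0, 118.0, 90.0, 66.0, 28.0])
print(validate_et(rs, obs))
```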
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact analytical expressions, free from non-physical parameters, and show them to be valid for fast yet accurate estimations.
Autonomous GPS/INS navigation experiment for Space Transfer Vehicle
NASA Technical Reports Server (NTRS)
Upadhyay, Triveni N.; Cotterill, Stephen; Deaton, A. W.
1993-01-01
An experiment to validate the concept of developing an autonomous integrated spacecraft navigation system using on board Global Positioning System (GPS) and Inertial Navigation System (INS) measurements is described. The feasibility of integrating GPS measurements with INS measurements to provide a total improvement in spacecraft navigation performance, i.e. improvement in position, velocity and attitude information, was previously demonstrated. An important aspect of this research is the automatic real time reconfiguration capability of the system designed to respond to changes in a spacecraft mission under the control of an expert system.
Autonomous GPS/INS navigation experiment for Space Transfer Vehicle (STV)
NASA Technical Reports Server (NTRS)
Upadhyay, Triveni N.; Cotterill, Stephen; Deaton, A. Wayne
1991-01-01
An experiment to validate the concept of developing an autonomous integrated spacecraft navigation system using on board Global Positioning System (GPS) and Inertial Navigation System (INS) measurements is described. The feasibility of integrating GPS measurements with INS measurements to provide a total improvement in spacecraft navigation performance, i.e. improvement in position, velocity and attitude information, was previously demonstrated. An important aspect of this research is the automatic real time reconfiguration capability of the system designed to respond to changes in a spacecraft mission under the control of an expert system.
Vignati, A. M.; Aguirre, C. P.; Artusa, D. R.; ...
2015-03-24
CUORE-0 is an experiment built to test and demonstrate the performance of the upcoming CUORE experiment. Composed of 52 TeO₂ bolometers of 750 g each, it is expected to reach a sensitivity to the 0νββ half-life of ¹³⁰Te of around 3·10²⁴ y in one year of live time. We present the first data, corresponding to an exposure of 7.1 kg y. An analysis of the background indicates that the CUORE sensitivity goal is within reach, validating our techniques to reduce the α radioactivity of the detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei-Yang
Foam materials are used to protect sensitive components from impact loading. In order to predict and simulate the foam performance under various loading conditions, a validated foam model is needed and the mechanical properties of foams need to be characterized. Uniaxial compression and tension tests were conducted for different densities of foams under various temperatures and loading rates. Crush stress, tensile strength, and elastic modulus were obtained. A newly developed confined compression experiment provided data for investigating the foam flow direction. A biaxial tension experiment was also developed to explore the damage surface of a rigid polyurethane foam.
NASA Astrophysics Data System (ADS)
Vignati, A. M.; Aguirre, C. P.; Artusa, D. R.; Avignone, F. T., III; Azzolini, O.; Balata, M.; Banks, T. I.; Bari, G.; Beeman, J.; Bellini, F.; Bersani, A.; Biassoni, M.; Brofferio, C.; Bucci, C.; Cai, X. Z.; Camacho, A.; Canonica, L.; Cao, X.; Capelli, S.; Carbone, L.; Cardani, L.; Carrettoni, M.; Casali, N.; Chiesa, D.; Chott, N.; Clemenza, M.; Cosmelli, C.; Cremonesi, O.; Creswick, R. J.; Dafinei, I.; Dally, A.; Datskov, V.; De Biasi, A.; Deninno, M. M.; Di Domizio, S.; di Vacri, M. L.; Ejzak, L.; Fang, D. Q.; Farach, H. A.; Faverzani, M.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Goett, J.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Heeger, K. M.; Hennings-Yeomans, R.; Huang, H. Z.; Kadel, R.; Kazkaz, K.; Keppel, G.; Kolomensky, Yu. G.; Li, Y. L.; Ligi, C.; Lim, K. E.; Liu, X.; Ma, Y. G.; Maiano, C.; Maino, M.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Napolitano, T.; Nisi, S.; Nones, C.; Norman, E. B.; Nucciotti, A.; O'Donnell, T.; Orio, F.; Orlandi, D.; Ouellet, J. L.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pedretti; Pessina, G.; Piperno, G.; Pira, C.; Pirro, S.; Previtali, E.; Rampazzo, V.; Rosenfeld, C.; Rusconi, C.; Sala, E.; Sangiorgio, S.; Scielzo, N. D.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tian, W. D.; Tomei, C.; Trentalange, S.; Ventura, G.; Wang, B. S.; Wang, H. W.; Wielgus, L.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zarra, C.; Zhu, B. X.; Zucchelli, S.
CUORE-0 is an experiment built to test and demonstrate the performance of the upcoming CUORE experiment. Composed of 52 TeO₂ bolometers of 750 g each, it is expected to reach a sensitivity to the 0νββ half-life of ¹³⁰Te of around 3·10²⁴ y in one year of live time. We present the first data, corresponding to an exposure of 7.1 kg y. An analysis of the background indicates that the CUORE sensitivity goal is within reach, validating our techniques to reduce the α radioactivity of the detector.
Autonomous GPS/INS navigation experiment for Space Transfer Vehicle
NASA Astrophysics Data System (ADS)
Upadhyay, Triveni N.; Cotterill, Stephen; Deaton, A. W.
1993-07-01
An experiment to validate the concept of developing an autonomous integrated spacecraft navigation system using on board Global Positioning System (GPS) and Inertial Navigation System (INS) measurements is described. The feasibility of integrating GPS measurements with INS measurements to provide a total improvement in spacecraft navigation performance, i.e. improvement in position, velocity and attitude information, was previously demonstrated. An important aspect of this research is the automatic real time reconfiguration capability of the system designed to respond to changes in a spacecraft mission under the control of an expert system.
Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors
NASA Technical Reports Server (NTRS)
Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele
2010-01-01
This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.
CMS endcap RPC performance analysis
NASA Astrophysics Data System (ADS)
Teng, H.; CMS Collaboration
2014-08-01
The Resistive Plate Chamber (RPC) detector system in the LHC-CMS experiment is designed for triggering purposes. The endcap RPC system was successfully operated from the commissioning period (2008) to the end of Run 1 (2013). We have developed an analysis tool for endcap RPC performance and validated the efficiency calculation algorithm, focusing on the first endcap station, which was assembled and tested by the Peking University group. We cross-checked the results obtained with those extracted with alternative methods and found good agreement in terms of performance parameters [1]. The results showed that the CMS-RPC endcap system fulfilled the performance expected in the Technical Design Report [2].
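The basic quantity behind such a validation is the per-chamber hit efficiency with its statistical error. As a minimal sketch of that calculation (generic Python with hypothetical counts; not the CMS analysis tool, which applies additional matching and geometry criteria):

```python
import math

def hit_efficiency(n_matched, n_expected):
    """Efficiency and simple binomial error from matched vs. expected hits
    (e.g., extrapolated tracks crossing a chamber vs. matched RPC hits)."""
    eff = n_matched / n_expected
    err = math.sqrt(eff * (1.0 - eff) / n_expected)
    return eff, err

# hypothetical counts for one endcap roll
eff, err = hit_efficiency(9412, 9900)
print(f"efficiency = {eff:.3f} ± {err:.3f}")
```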
A Hybrid Reality Radiation-free Simulator for Teaching Wire Navigation Skills
Kho, Jenniefer Y.; Johns, Brian D.; Thomas, Geb. W.; Karam, Matthew D.; Marsh, J. Lawrence; Anderson, Donald D.
2016-01-01
Objectives Surgical simulation is an increasingly important method for facilitating the acquisition of surgical skills. Simulation can be helpful in developing hip fracture fixation skills because it is a common procedure for which performance can be objectively assessed (i.e., the tip-apex distance). The procedure requires fluoroscopic guidance to drill a wire along an osseous trajectory to a precise position within bone. The objective of this study was to assess the construct validity of a novel radiation-free simulator designed to teach wire navigation skills in hip fracture fixation. Methods Novices (N=30) with limited to no surgical experience in hip fracture fixation and experienced surgeons (N=10) participated. Participants drilled a guide wire into the center-center position of a synthetic femoral head in a hip fracture simulator, using electromagnetic sensors to track the guide wire position. Sensor data were gathered to generate fluoroscopic-like images of the hip and guide wire. Simulator performance of novice and experienced participants was compared to measure construct validity. Results The simulator was able to discriminate the accuracy in guide wire position between novices and experienced surgeons. Experienced surgeons achieved a more accurate tip-apex distance than novices (13 vs 23 mm, respectively, p=0.009). The magnitude of improvement on successive simulator attempts was dependent on level of expertise; tip-apex distance improved significantly in the novice group, while it was unchanged in the experienced group. Conclusions This hybrid reality, radiation-free hip fracture simulator, which combines real-world objects with computer-generated imagery, demonstrates construct validity by distinguishing the performance of novices and experienced surgeons. There is a differential effect depending on level of experience, and the simulator could be used as an effective training tool for novice surgeons. PMID:26165262
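The outcome metric used here, the tip-apex distance, has a standard definition (Baumgaertner et al.): the apex-to-tip distances measured on the AP and lateral radiographs are each corrected for magnification using the known implant diameter, then summed. A small Python sketch with hypothetical measurements (not the simulator's code):

```python
def tip_apex_distance(x_ap_mm, x_lat_mm, d_true_mm, d_ap_mm, d_lat_mm):
    """Tip-apex distance: apex-to-tip distances on the AP and lateral views,
    each scaled by (true implant diameter / apparent diameter on that view)
    to correct for radiographic magnification. All inputs in mm."""
    return x_ap_mm * (d_true_mm / d_ap_mm) + x_lat_mm * (d_true_mm / d_lat_mm)

# hypothetical measurements: 12 mm and 10 mm apparent apex-to-tip distances,
# an 8.0 mm implant imaged at 8.8 mm (AP) and 8.4 mm (lateral)
print(f"TAD = {tip_apex_distance(12.0, 10.0, 8.0, 8.8, 8.4):.1f} mm")  # ~20.4 mm
```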
Assessment of a recombinant androgen receptor binding assay: initial steps towards validation.
Freyberger, Alexius; Weimer, Marc; Tran, Hoai-Son; Ahr, Hans-Jürgen
2010-08-01
Despite more than a decade of research in the field of endocrine-active compounds with affinity for the androgen receptor (AR), no validated recombinant AR binding assay is yet available, although recombinant AR can be obtained from several sources. With funding from the European Union (EU)-sponsored 6th framework project, ReProTect, we developed a model protocol for such an assay based on a simple AR binding assay recently developed at our institution. Important features of the protocol were the use of a recombinant rat fusion protein of thioredoxin with both the hinge region and ligand binding domain (LBD) of the rat AR (which is identical to the human AR-LBD) and performance in a 96-well plate format. Besides two reference compounds [dihydrotestosterone (DHT), androstenedione], ten test compounds with different affinities for the AR [levonorgestrel, progesterone, prochloraz, 17alpha-methyltestosterone, flutamide, norethynodrel, o,p'-DDT, dibutylphthalate, vinclozolin, linuron] were used to explore the performance of the assay. At least three independent experiments per compound were performed. The AR binding properties of reference and test compounds were well detected; in terms of the relative ranking of binding affinities, there was good agreement with published data obtained from experiments using recombinant AR preparations. Irrespective of the chemical nature of the compound, individual IC50 values for a given compound varied by no more than a factor of 2.6. Our data demonstrate that the assay reliably ranked compounds with strong, weak, and no/marginal affinity for the AR with high accuracy. It avoids the manipulation and use of animals, as a recombinant protein is used, and thus contributes to the 3R concept. On the whole, this assay is a promising candidate for further validation. Copyright 2009 Elsevier Inc. All rights reserved.
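For orientation: IC50 values of the kind ranked above are conventionally obtained by fitting a four-parameter logistic (Hill) curve to displacement data. The sketch below shows that standard fit; the concentrations, binding values, and starting estimates are hypothetical and unrelated to the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def competition_curve(conc, top, bottom, ic50, hill):
    # Four-parameter logistic commonly used for competitive binding data
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical displacement data: concentration (M) vs. % specific binding
conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
bound = np.array([98.0, 92.0, 71.0, 35.0, 12.0, 4.0])

popt, _ = curve_fit(competition_curve, conc, bound,
                    p0=[100.0, 0.0, 1e-7, 1.0], maxfev=10000)
print(f"estimated IC50 = {popt[2]:.2e} M")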
New Reactor Physics Benchmark Data in the March 2012 Edition of the IRPhEP Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; Jim Gulliford
2012-11-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate- or special-effects data, for nuclear energy and technology applications. Numerous experiments performed worldwide represent a large investment of infrastructure, expertise, and cost, and are valuable resources of data for present and future research. These valuable assets provide the basis for recording, developing, and validating methods. If the experimental data are lost, the high cost to repeat many of these measurements may be prohibitive. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. Contributors from around the world collaborate in the evaluation and review of selected benchmark experiments for inclusion in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [1]. Several new evaluations have been prepared for inclusion in the March 2012 edition of the IRPhEP Handbook.
Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo
2016-01-01
Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but it is challenging because both homology tests on off-target sequences and stringent filtering constraints on the primers must be considered. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests and lacking support for ranking of primers, for TaqMan probes, and for simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes, and ranks the resulting primers to return the top-ranked primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. PMID:27154272
NASA Astrophysics Data System (ADS)
Agafonova, N.; Aleksandrov, A.; Anokhina, A.; Aoki, S.; Ariga, A.; Ariga, T.; Bender, D.; Bertolin, A.; Bozza, C.; Brugnera, R.; Buonaura, A.; Buontempo, S.; Büttner, B.; Chernyavsky, M.; Chukanov, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; De Serio, M.; Del Amo Sanchez, P.; Di Crescenzo, A.; Di Ferdinando, D.; Di Marco, N.; Dmitrievski, S.; Dracos, M.; Duchesneau, D.; Dusini, S.; Dzhatdoev, T.; Ebert, J.; Ereditato, A.; Fini, R. A.; Fukuda, T.; Galati, G.; Garfagnini, A.; Giacomelli, G.; Göllnitz, C.; Goldberg, J.; Gornushkin, Y.; Grella, G.; Guler, M.; Gustavino, C.; Hagner, C.; Hara, T.; Hollnagel, A.; Hosseini, B.; Ishida, H.; Ishiguro, K.; Jakovcic, K.; Jollet, C.; Kamiscioglu, C.; Kamiscioglu, M.; Kawada, J.; Kim, J. H.; Kim, S. H.; Kitagawa, N.; Klicek, B.; Kodama, K.; Komatsu, M.; Kose, U.; Kreslo, I.; Lauria, A.; Lenkeit, J.; Ljubicic, A.; Longhin, A.; Loverre, P.; Malgin, A.; Malenica, M.; Mandrioli, G.; Matsuo, T.; Matveev, V.; Mauri, N.; Medinaceli, E.; Meregaglia, A.; Mikado, S.; Monacelli, P.; Montesi, M. C.; Morishima, K.; Muciaccia, M. T.; Naganawa, N.; Naka, T.; Nakamura, M.; Nakano, T.; Nakatsuka, Y.; Niwa, K.; Ogawa, S.; Okateva, N.; Olshevsky, A.; Omura, T.; Ozaki, K.; Paoloni, A.; Park, B. D.; Park, I. G.; Pasqualini, L.; Pastore, A.; Patrizii, L.; Pessard, H.; Pistillo, C.; Podgrudkov, D.; Polukhina, N.; Pozzato, M.; Pupilli, F.; Roda, M.; Rokujo, H.; Roganova, T.; Rosa, G.; Ryazhskaya, O.; Sato, O.; Schembri, A.; Shakiryanova, I.; Shchedrina, T.; Sheshukov, A.; Shibuya, H.; Shiraishi, T.; Shoziyoev, G.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Spinetti, M.; Stanco, L.; Starkov, N.; Stellacci, S. M.; Stipcevic, M.; Strauss, T.; Strolin, P.; Takahashi, S.; Tenti, M.; Terranova, F.; Tioukov, V.; Tufanli, S.; Vilain, P.; Vladimirov, M.; Votano, L.; Vuilleumier, J. L.; Wilquet, G.; Wonsak, B.; Yoon, C. S.; Zemskova, S.; Zghiche, A.
2014-08-01
The OPERA experiment, designed to perform the first observation of νμ → ντ oscillations in appearance mode through the detection of the τ leptons produced in ντ charged current interactions, collected data from 2008 to 2012. In the present paper, the procedure developed to detect particle decays occurring over very short distances from the neutrino interaction point is described in detail and applied to the search for charmed hadrons, which show decay topologies similar to that of the τ lepton. In the analysed sample, 50 charm decay candidate events are observed, in agreement with the expected yield, proving that the detector performance and the analysis chain applied to neutrino events are well reproduced by the OPERA simulation and thus validating the methods for ντ appearance detection.
Bertram, Christof A; Gurtner, Corinne; Dettwiler, Martina; Kershaw, Olivia; Dietert, Kristina; Pieper, Laura; Pischon, Hannah; Gruber, Achim D; Klopfleisch, Robert
2018-07-01
Integration of new technologies, such as digital microscopy, into a highly standardized laboratory routine requires validation of their performance in terms of reliability, specificity, and sensitivity. However, a validation study of digital microscopy is currently lacking in veterinary pathology. The aim of the current study was to validate the usability of digital microscopy in terms of diagnostic accuracy, speed, and confidence for diagnosing and differentiating common canine cutaneous tumor types and to compare it to classical light microscopy. Therefore, 80 histologic sections including 17 different skin tumor types were examined twice as glass slides and twice as digital whole-slide images by 6 pathologists with different levels of experience at 4 time points. Comparison of both methods found digital microscopy to be noninferior for differentiating individual tumor types within the epithelial and mesenchymal tumor categories, but diagnostic concordance was slightly lower for differentiating individual round cell tumor types by digital microscopy. In addition, digital microscopy was associated with significantly shorter diagnostic time, but diagnostic confidence was lower and technical quality was considered inferior for whole-slide images compared with glass slides. Of note, diagnostic performance for whole-slide images scanned at 200× magnification was noninferior to that for slides scanned at 400×. In conclusion, digital microscopy differs only minimally from light microscopy in a few aspects of diagnostic performance and overall appears adequate for the diagnosis of individual canine cutaneous tumors, with minor limitations for differentiating individual round cell tumor types and grading mast cell tumors.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.
Study on bamboo gluing performance numerical simulation
NASA Astrophysics Data System (ADS)
Zhao, Z. R.; Sun, W. H.; Sui, X. M.; Zhang, X. F.
2018-01-01
Bamboo glued timber is a green building material that can be widely used for beams and columns in modern buildings. Existing bamboo glued timber is usually produced from bamboo strips or from bamboo bundles rolled from such strips. The performance of new bamboo glued timber is determined by the adhesion characteristics of the bamboo. On this basis, a cohesive damage model of the bamboo glue joint is created, and experimental results are used to validate it. The model proposed in this work agrees well with the experimental results. The effect of different bonding lengths on gluing performance is analysed. The model is helpful for the application of bamboo integrated timber.
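The abstract does not give the form of the cohesive law. A common choice in cohesive damage models is a bilinear traction-separation law; a minimal sketch follows, with all parameter values illustrative rather than the paper's calibrated ones.

import numpy as np

def bilinear_traction(delta, delta0, delta_f, t_max):
    # Bilinear traction-separation law: linear elastic branch up to the
    # interface strength t_max at separation delta0, then linear softening
    # to zero traction at the failure separation delta_f.
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= delta0,
                 t_max * delta / delta0,
                 t_max * (delta_f - delta) / (delta_f - delta0))
    return np.clip(t, 0.0, None)

# Illustrative parameters: strength 6 MPa, initiation 0.02 mm, failure 0.10 mm
sep = np.linspace(0.0, 0.12, 7)
print(bilinear_traction(sep, delta0=0.02, delta_f=0.10, t_max=6.0))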
Cooperation based dynamic team formation in multi-agent auctions
NASA Astrophysics Data System (ADS)
Pippin, Charles E.; Christensen, Henrik
2012-06-01
Auction-based methods are often used to perform distributed task allocation on multi-agent teams. Many existing approaches to auctions assume fully cooperative team members. On in-situ and dynamically formed teams, reciprocal collaboration may not always be a valid assumption. This paper presents an approach for dynamically selecting auction partners based on observed team member performance and shared reputation. We also introduce a shared reputation authority mechanism. Finally, experiments are performed in simulation on multiple UAV platforms to highlight situations in which it is better to enforce cooperation in auctions using this approach.
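As a rough illustration of combining observed performance with shared reputation, here is a minimal partner-selection sketch; the weighting scheme, threshold, and field names are assumptions, not the paper's algorithm.

def select_partners(candidates, min_trust=0.6, w_obs=0.7, w_rep=0.3):
    # candidates: (agent_id, observed_success_rate, shared_reputation) tuples.
    # Score each candidate by a weighted blend of its own observed record
    # and the reputation reported by the shared authority.
    scored = [(aid, w_obs * obs + w_rep * rep) for aid, obs, rep in candidates]
    return [aid for aid, score in scored if score >= min_trust]

# Hypothetical teammates: only sufficiently trusted ones receive task bids
peers = [("uav1", 0.9, 0.8), ("uav2", 0.4, 0.5), ("uav3", 0.7, 0.9)]
print(select_partners(peers))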
A demonstration of direct access to colored stimuli following cueing by color.
Navon, David; Kasten, Ronen
2011-09-01
To test whether cueing by color can affect orienting without first computing the location of the cued color, the impact of reorienting on the validity effect was examined. In Experiment 1, subjects were asked to detect a black dot target presented at random on either of two colored forms. The forms started being presented 750 ms before the onset of a central cue (either an arrow or a colored square). In some proportion of the trials, the colors switched locations 150 ms after cue onset, simultaneously with target onset. The color switch was not found to retard responses following a color cue more than following a location cue. Furthermore, it did not reduce the validity effect of the color cue: though the validity effect of the location cue was considerably larger than the validity effect of the color cue, both effects were additive with the presence/absence of a color switch. In Experiment 2, subjects were instead asked to detect a change in shape of one of the colored forms. In this case, the color switch was found to affect performance even less following a color cue. The fact that, across experiments, the color switch retarded neither responding nor orienting selectively in the color cue condition indicates that when attention is set to a certain color, reorienting to a new object following a color switch does not require re-computing the address of the cued color. That finding is argued to embarrass a strong space-based view of visual attention. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Rhode, Matthew N.; Oberkampf, William L.
2012-01-01
A high-quality model validation experiment was performed in the NASA Langley Research Center Unitary Plan Wind Tunnel to assess the predictive accuracy of computational fluid dynamics (CFD) models for a blunt-body supersonic retro-propulsion configuration at Mach numbers from 2.4 to 4.6. Static and fluctuating surface pressure data were acquired on a 5-inch-diameter test article with a forebody composed of a spherically-blunted, 70-degree half-angle cone and a cylindrical aft body. One non-powered configuration with a smooth outer mold line was tested, as well as three different powered, forward-firing nozzle configurations: a centerline nozzle, three nozzles equally spaced around the forebody, and a combination with all four nozzles. A key objective of the experiment was the determination of experimental uncertainties from a range of sources such as random measurement error, flowfield non-uniformity, and model/instrumentation asymmetries. This paper discusses the design of the experiment towards capturing these uncertainties for the baseline non-powered configuration, the methodology utilized in quantifying the various sources of uncertainty, and examples of the uncertainties applied to non-powered and powered experimental results. The analysis showed that flowfield non-uniformity was the dominant contributor to the overall uncertainty, a finding in agreement with other experiments that have quantified various sources of uncertainty.
The use of control groups in artificial grammar learning.
Reber, Rolf; Perruchet, Pierre
2003-01-01
Experimenters assume that participants of an experimental group have learned an artificial grammar if they classify test items with significantly higher accuracy than does a control group without training. The validity of such a comparison, however, depends on an additivity assumption: learning is superimposed on the action of non-specific variables (for example, repetitions of letters) that modulate the performance of the experimental group and the control group to the same extent. In two experiments we were able to show that this additivity assumption does not hold. Grammaticality classifications in control groups without training (Experiments 1 and 2) depended on non-specific features. There were no such biases in the experimental groups. Control groups with training on randomized strings (Experiment 2) showed fewer biases than did control groups without training. Furthermore, we reanalysed published research and demonstrated that earlier experiments using control groups without training had produced similar biases in control group performance, bolstering the finding that using control groups without training is methodologically unsound.
Validation of a Novel Laparoscopic Adjustable Gastric Band Simulator
Sankaranarayanan, Ganesh; Adair, James D.; Halic, Tansel; Gromski, Mark A.; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B.; De, Suvranu
2011-01-01
Background Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. Study Aim The aim of our study was to determine face, construct and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Methods Twenty-eight subjects were categorized into two groups (Expert and Novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least four years of laparoscopic training and operative experience. Novices consisted of subjects with medical training, but with less than four years of laparoscopic training. The subjects performed tasks on the virtual reality laparoscopic adjustable band surgery simulator and were automatically scored on each task. The subjects then completed a questionnaire to evaluate face and content validity. Results On a 5-point Likert scale (1 – lowest score, 5 – highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 [Face Validity]. There were significant differences in the performance of the two subject groups (Expert and Novice), based on total scores (p<0.001) [Construct Validity]. The mean score for utility of the simulator, as assessed by the Expert group, was 4.50 ± 0.71 [Content Validity]. Conclusion We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure. PMID:20734069
Control of Flexible Structures (COFS) Flight Experiment Background and Description
NASA Technical Reports Server (NTRS)
Hanks, B. R.
1985-01-01
A fundamental problem in designing and delivering large space structures to orbit is to provide sufficient structural stiffness and static configuration precision to meet performance requirements. These requirements are directly related to control requirements and the degree of control system sophistication available to supplement the as-built structure. Background and rationale are presented for a research study in structures, structural dynamics, and controls using a relatively large, flexible beam as a focus. This experiment would address fundamental problems applicable to large, flexible space structures in general and would involve a combination of ground tests, flight behavior prediction, and instrumented orbital tests. Intended to be multidisciplinary but basic within each discipline, the experiment should provide improved understanding and confidence in making design trades between structural conservatism and control system sophistication for meeting static shape and dynamic response/stability requirements. Quantitative results should be obtained for use in improving the validity of ground tests for verifying flight performance analyses.
ERIC Educational Resources Information Center
Holzmann, Vered; Mischari, Shoshana; Goldberg, Shoshana; Ziv, Amitai
2012-01-01
Purpose: This article aims to present a unique systematic and validated method for creating a linkage between past experiences and management of future occurrences in an organization. Design/methodology/approach: The study is based on actual data accumulated in a series of projects performed in a major medical center. Qualitative and quantitative…
The teratology testing of cosmetics.
Spézia, François; Barrow, Paul C
2013-01-01
In Europe, the developmental toxicity testing (including teratogenicity) of new cosmetic ingredients is performed according to the Cosmetics Directive 76/768/EEC: only alternatives leading to full replacement of animal experiments should be used. This chapter presents the three scientifically validated animal alternative methods for the assessment of embryotoxicity: the embryonic stem cell test (EST), the micromass (MM) assay, and the whole embryo culture (WEC) assay.
System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling
Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...
2017-04-01
Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data are then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.
Validation of the new code package APOLLO2.8 for accurate PWR neutronics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Bernard, D.; Blaise, P.
2013-07-01
This paper summarizes the qualification work performed to demonstrate the accuracy of the new APOLLO2.8/SHEM-MOC package, based on the JEFF3.1.1 nuclear data file, for the prediction of PWR neutronics parameters. This experimental validation is based on PWR mock-up critical experiments performed in the EOLE/MINERVE zero-power reactors and on post-irradiation examinations of spent fuel assemblies from the French PWRs. The calculation-experiment comparison for the main design parameters is presented: reactivity of UOX and MOX lattices, depletion calculation and fuel inventory, reactivity loss with burnup, pin-by-pin power maps, Doppler coefficient, moderator temperature coefficient, void coefficient, UO2-Gd2O3 poisoning worth, efficiency of Ag-In-Cd and B4C control rods, and reflector saving for both the standard 2-cm baffle and the GEN3 advanced thick stainless-steel reflector. From this qualification process, calculation biases and associated uncertainties are derived. The APOLLO2.8 package is already implemented in the new AREVA ARCADIA calculation chain for core physics and is currently under implementation in the future neutronics package of the French utility Electricite de France. (authors)
Spatial-temporal discriminant analysis for ERP-based brain-computer interface.
Zhang, Yu; Zhou, Guoxu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2013-03-01
Linear discriminant analysis (LDA) has been widely adopted to classify event-related potentials (ERP) in brain-computer interfaces (BCI). Good classification performance of an ERP-based BCI usually requires sufficient data recordings for effective training of the LDA classifier, and hence a long system calibration time, which may reduce the system's practicability and cause user resistance to the BCI system. In this study, we introduce spatial-temporal discriminant analysis (STDA) for ERP classification. As a multiway extension of LDA, the STDA method maximizes the discriminant information between target and nontarget classes by collaboratively finding two projection matrices from the spatial and temporal dimensions, which effectively reduces the feature dimensionality in the discriminant analysis and hence significantly decreases the number of required training samples. The proposed STDA method was validated on dataset II of BCI Competition III and on a dataset recorded in our own experiments, and compared to state-of-the-art algorithms for ERP classification. Online experiments were additionally implemented for validation. The superior classification performance when using few training samples shows that STDA is effective in reducing the system calibration time and improving the classification accuracy, thereby enhancing the practicability of ERP-based BCI.
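To make the alternating two-projection idea concrete, the sketch below implements one plausible reading of it with NumPy: discriminant directions are computed alternately over the channel and time dimensions of the trial tensor. It is a schematic illustration under stated assumptions (two classes, simple regularization), not the authors' exact algorithm.

import numpy as np

def _projection(mats, y, k):
    # Top-k discriminant directions for samples given as d x m matrices:
    # generalized eigenvectors of within-class vs. between-class scatter.
    n, d, m = mats.shape
    gmean = mats.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Mc = mats[y == c].mean(axis=0)
        Sb += (y == c).sum() * (Mc - gmean) @ (Mc - gmean).T
        for Xi in mats[y == c]:
            Sw += (Xi - Mc) @ (Xi - Mc).T
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real

def stda(X, y, ks=2, kt=2, n_iter=5):
    # X: trials x channels x time. Alternate between a spatial projection
    # Ws (channels -> ks) and a temporal projection Wt (time -> kt).
    n, C, T = X.shape
    Wt = np.eye(T)[:, :kt]
    for _ in range(n_iter):
        Ws = _projection(X @ Wt, y, ks)                      # spatial step
        Wt = _projection(np.transpose(X, (0, 2, 1)) @ Ws, y, kt)  # temporal
    feats = np.einsum('ck,nct,tl->nkl', Ws, X, Wt).reshape(n, -1)
    return Ws, Wt, feats   # low-dimensional features for an LDA classifier

# Tiny synthetic example: 60 trials, 8 channels, 50 samples
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8, 50))
y = np.repeat([0, 1], 30)
X[y == 1, :2, 10:20] += 0.5        # class-dependent spatio-temporal pattern
Ws, Wt, feats = stda(X, y)
print(feats.shape)                 # (60, ks*kt) feature matrix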
Melchiors, Jacob; Petersen, K; Todsen, T; Bohr, A; Konge, Lars; von Buchwald, Christian
2018-06-01
The attainment of specific identifiable competencies is the primary measure of progress in the modern medical education system. The system therefore requires a method for accurately assessing competence to be feasible. Evidence of validity needs to be gathered before an assessment tool can be implemented in the training and assessment of physicians. According to contemporary validity theory, this evidence must be gathered from specific sources in a structured and rigorous manner. Flexible pharyngo-laryngoscopy (FPL) is central to the otorhinolaryngologist. We aim to evaluate the flexible pharyngo-laryngoscopy assessment tool (FLEXPAT) created in a previous study and to establish a pass-fail level for proficiency. Eighteen physicians with different levels of experience (novices, intermediates, and experienced) were recruited to the study. Each performed an FPL on two patients. These procedures were video recorded, blinded, and assessed by two specialists. The score was expressed as the percentage of the maximum possible score. Cronbach's α was used to analyze the internal consistency of the data, and a generalizability analysis was performed. The scores of the three groups were explored, and a pass-fail level was determined using the contrasting groups standard-setting method. Internal consistency was strong, with a Cronbach's α of 0.86. We found a generalizability coefficient of 0.72, sufficient for moderate-stakes assessment. We found a significant difference between the novice and experienced groups (p < 0.001) and a strong correlation between experience and score (Pearson's r = 0.75). The pass-fail level was established at 72% of the maximum score. Applying this pass-fail level in the test population resulted in half of the intermediate group receiving a failing score. We gathered validity evidence for the FLEXPAT according to the contemporary framework as described by Messick. Our results support a claim of validity and are comparable to other studies exploring clinical assessment tools. The high rate of underperforming physicians in the intermediate group demonstrates the need for continued educational intervention. Based on our work, we recommend the use of the FLEXPAT in clinical assessment of FPL and the application of a pass-fail level of 72% for proficiency.
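The contrasting groups method sets the cut score where the score distributions of the two criterion groups intersect. A minimal sketch under a normality assumption follows; the sample scores are hypothetical, not the study's data.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical scores (% of max) for the two contrasting groups
novice = np.array([48, 55, 60, 62, 66, 70], dtype=float)
experienced = np.array([74, 78, 81, 85, 88, 92], dtype=float)

m0, s0 = novice.mean(), novice.std(ddof=1)
m1, s1 = experienced.mean(), experienced.std(ddof=1)

# Cut score = point between the group means where the fitted normal
# densities cross, i.e. where misclassification of the two groups balances
cut = brentq(lambda x: norm.pdf(x, m0, s0) - norm.pdf(x, m1, s1), m0, m1)
print(f"pass-fail level ~ {cut:.1f}% of the maximum score")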
Scheerhagen, Marisja; van Stel, Henk F.; Birnie, Erwin; Franx, Arie; Bonsel, Gouke J.
2015-01-01
Background Maternity care is an integrated care process, which consists of different services, involves different professionals and covers different time windows. To measure the performance of maternity care based on clients' experiences, we developed and validated a questionnaire. Methods and Findings We used the 8-domain WHO Responsiveness model and previous materials to develop a self-report questionnaire. A dual study design was used for development and validation. Content validity of ReproQ version 0 was determined through structured interviews with 11 pregnant women (≥28 weeks), 10 women who had recently given birth (≤12 weeks), and 19 maternity care professionals. Structured interviews established the relevance of the domains to the women; all items were commented on separately. All Responsiveness domains were judged relevant, with Dignity and Communication ranking highest. The main missing topic was the assigned expertise of the health professional. After a first adaptation, construct validity of ReproQ version 1 was determined through a web-based survey. Respondents were approached by maternity care organizations with different levels of integration of the services of midwives and obstetricians. We sent questionnaires to 605 third-trimester pregnant women (response 65%), and 810 women 6 weeks after delivery (response 55%). Construct validity was based on response patterns, exploratory factor analysis, association of the overall score with a Visual Analogue Scale (VAS), and known-group comparisons. The median overall ReproQ score was 3.70 (range 1–4), showing good responsiveness. The exploratory factor analysis supported the assumed domain structure and suggested several adaptations. Correlation of the VAS rating with the overall ReproQ score supported validity (Spearman's r = 0.56 antepartum and 0.59 postpartum; p<0.001). Pre-stated group comparisons confirmed the expected difference following a good vs. adverse birth outcome. Fully integrated organizations performed slightly better (median = 3.78) than less integrated organizations (median = 3.63; p<0.001). The participation rate of women with a low educational level and/or a non-western origin was low. Conclusions The ReproQ appears suitable for assessing the quality of maternity care from the clients' perspective. Recruitment of disadvantaged groups requires additional non-digital approaches. PMID:25671310
Four experimental demonstrations of active vibration control for flexible structures
NASA Technical Reports Server (NTRS)
Phillips, Doug; Collins, Emmanuel G., Jr.
1990-01-01
Laboratory experiments designed to test prototype active-vibration-control systems under development for future flexible space structures are described, summarizing previously reported results. The control-synthesis technique employed for all four experiments was the maximum-entropy optimal-projection (MEOP) method (Bernstein and Hyland, 1988). Consideration is given to: (1) a pendulum experiment on large-amplitude LF dynamics; (2) a plate experiment on broadband vibration suppression in a two-dimensional structure; (3) a multiple-hexagon experiment combining the factors studied in (1) and (2) to simulate the complexity of a large space structure; and (4) the NASA Marshall ACES experiment on a lightweight deployable 45-foot beam. Extensive diagrams, drawings, graphs, and photographs are included. The results are shown to validate the MEOP design approach, demonstrating that good performance is achievable using relatively simple low-order decentralized controllers.
OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING
Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.
2017-01-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369
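As a worked illustration of the CRB machinery described above: for a signal model s(theta) observed in white Gaussian noise of standard deviation sigma, the Fisher information is J = (1/sigma^2) * (Jacobian of s)^T (Jacobian of s), and the CRB is its inverse. The sketch below computes this with a finite-difference Jacobian for a toy relaxation model; the model and numbers are illustrative, not the paper's MR fingerprinting signal model.

import numpy as np

def crb(signal, theta, sigma, eps=1e-6):
    # Cramér-Rao bound for unbiased estimators of theta, assuming the
    # measured samples are signal(theta) plus white Gaussian noise (sigma).
    theta = np.asarray(theta, dtype=float)
    s0 = signal(theta)
    J = np.empty((s0.size, theta.size))
    for i in range(theta.size):        # finite-difference Jacobian
        dt = np.zeros_like(theta)
        dt[i] = eps
        J[:, i] = (signal(theta + dt) - s0) / eps
    fisher = J.T @ J / sigma**2
    return np.linalg.inv(fisher)       # diagonal = variance lower bounds

# Toy model: saturation-recovery signal with parameters (amplitude, T1)
t = np.linspace(0.1, 3.0, 30)
model = lambda th: th[0] * (1.0 - np.exp(-t / th[1]))
print(np.sqrt(np.diag(crb(model, [1.0, 1.2], sigma=0.02))))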
National Transonic Facility Wall Pressure Calibration Using Modern Design of Experiments (Invited)
NASA Technical Reports Server (NTRS)
Underwood, Pamela J.; Everhart, Joel L.; DeLoach, Richard
2001-01-01
The Modern Design of Experiments (MDOE) has been applied to wind tunnel testing at NASA Langley Research Center for several years. At Langley, MDOE has proven to be a useful and robust approach to aerodynamic testing that yields significant reductions in the cost and duration of experiments while still providing the highest quality research results. This paper extends its application to include empty-tunnel wall pressure calibrations. These calibrations are performed in support of wall interference corrections. This paper presents the experimental objectives and the theoretical design process. To validate the tunnel-empty calibration experiment design, preliminary response surface models calculated from previously acquired data are also presented. Finally, lessons learned and future wall interference applications of MDOE are discussed.
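Response surface models of the kind mentioned above are typically full quadratic polynomials fitted by least squares over coded factor ranges. The following sketch fits such a surface to synthetic data; the factors, coefficients, and noise level are hypothetical, not the facility's calibration data.

import numpy as np

rng = np.random.default_rng(1)
# Coded factors, e.g. Mach number and axial station mapped to [-1, 1]
M = rng.uniform(-1, 1, 40)
x = rng.uniform(-1, 1, 40)
# Synthetic wall-pressure response with noise (illustrative only)
cp = (0.020 - 0.010 * M + 0.004 * x + 0.006 * M * x - 0.003 * M**2
      + rng.normal(0.0, 0.001, 40))

# Full quadratic response surface fitted by ordinary least squares
A = np.column_stack([np.ones_like(M), M, x, M * x, M**2, x**2])
coef, *_ = np.linalg.lstsq(A, cp, rcond=None)
print("fitted coefficients:", np.round(coef, 4))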
In Vitro Simulation and Validation of the Circulation with Congenital Heart Defects
Figliola, Richard S.; Giardini, Alessandro; Conover, Tim; Camp, Tiffany A.; Biglino, Giovanni; Chiulli, John; Hsia, Tain-Yen
2010-01-01
Despite the recent advances in computational modeling, experimental simulation of the circulation with congenital heart defects using mock flow circuits remains an important tool for device testing and for detailing the probable flow consequences of surgical and interventional corrections. Validated mock circuits can be applied to qualify the results from novel computational models. New mathematical tools, coupled with advanced clinical imaging methods, allow for improved assessment of experimental circuit performance relative to human function, as well as the potential for patient-specific adaptation. In this review, we address the development of three in vitro mock circuits specific to studies of congenital heart defects. The performance of an in vitro right heart circulation circuit is described through a series of verification and validation exercises, including correlations with animal studies and quantification of the effects of circuit inertiance on test results. We present our experience in the design of mock circuits suitable for investigations of the characteristics of the Fontan circulation. We use one such mock circuit to evaluate the accuracy of Doppler predictions in the presence of aortic coarctation. PMID:21218147
Summary of BISON Development and Validation Activities - NEAMS FY16 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, R. L.; Pastore, G.; Gamble, K. A.
This summary report contains an overview of work performed under the work package entitled “FY2016 NEAMS INL-Engineering Scale Fuel Performance (BISON)”. A first chapter identifies the specific FY-16 milestones, providing a basic description of the associated work and references to related detailed documentation. Where applicable, a representative technical result is provided. A second chapter summarizes major additional accomplishments, which include: 1) publication of a journal article on solution verification and validation of BISON for LWR fuel, 2) publication of a journal article on 3D Missing Pellet Surface (MPS) analysis of BWR fuel, 3) use of BISON to design a unique 3D MPS validation experiment for future installation in the Halden research reactor, 4) participation in an OECD benchmark on Pellet Clad Mechanical Interaction (PCMI), 5) participation in an OECD benchmark on Reactivity Insertion Accident (RIA) analysis, 6) participation in an OECD activity on uncertainty quantification and sensitivity analysis in nuclear fuel modeling, and 7) major improvements to BISON's fission gas behavior models. A final chapter outlines FY-17 future work.
Verification and Validation of the BISON Fuel Performance Code for PCMI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Novascone, Stephen Rhead; Gardner, Russell James
2016-06-01
BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described. Validation for application to light water reactor (LWR) PCMI problems is assessed by comparing predicted and measured rod diameters following base irradiation and power ramps. Results indicate a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and to more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. Initial rod diameter comparisons have led to consideration of additional separate-effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.
Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arpaia, P.; Girone, M.
2015-12-15
The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this aim, an uncertainty model of the transducer is presented, mainly based on a valve model exploiting a finite-element approach and on a virtual flowmeter model based on the Sereg-Schlumberger method. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer's metrological performance is assessed by a sensitivity analysis based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis is important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA joined together on a project to conduct a systematic investigation and validation of a ground-based piloted simulation of the Army/Sikorsky UH-60A helicopter. Flight testing was an integral part of the validation effort. Nap-of-the-Earth (NOE) piloting tasks which were investigated included the bob-up, the hover turn, the dash/quickstop, the sidestep, the dolphin, and the slalom. Results from the simulation indicate that the pilot's NOE task performance in the simulator is noticeably and quantifiably degraded when compared with the task performance results generated in flight test. The results of the flight test and ground-based simulation experiments support a unique rationale for the assessment of simulation fidelity: flight simulation fidelity should be judged quantitatively by measuring the pilot's control strategy and technique as induced by the simulator. A quantitative comparison is offered between the piloting technique observed in a flight simulator and that observed in flight test for the same tasks performed by the same pilots.
Radiocardiography in clinical cardiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierson, R.N. Jr.; Alam, S.; Kemp, H.G.
1977-01-01
Quantitative radiocardiography provides a variety of noninvasive measurements of value in cardiology. A gamma camera and computer processing are required for most of these measurements. The advantages of ease, economy, and safety of these procedures are, in part, offset by the complexity of as yet unstandardized methods and incomplete validation of results. The expansion of these techniques will inevitably be rapid. Their careful performance requires, for the moment, a major and perhaps dedicated effort by at least one member of the professional team, if the pitfalls that lead to unrecognized error are to be avoided. We may anticipate more automated and reliable results with increased experience and validation.
Observational and Modeling Studies of Clouds and the Hydrological Cycle
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1997-01-01
Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.
A statistical method (cross-validation) for bone loss region detection after spaceflight
Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.
2010-01-01
Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g., the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Methods for detecting such regions, however, remain an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimum suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to derive significant changes. PMID:20632144
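One plausible way to read the cross-validated threshold selection is to pick the suprathreshold whose clusters reproduce best across folds. The sketch below scores candidate thresholds by the Dice overlap of suprathreshold masks between training and held-out t-maps; the criterion, fold scheme, and synthetic data are assumptions, not the paper's exact procedure.

import numpy as np

def dice(a, b):
    # Overlap of two binary masks
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def tmap(x):
    # One-sample t-statistic per pixel for a subjects x pixels change array
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(len(x)) + 1e-12)

def select_threshold(scans, thresholds, n_folds=5, seed=0):
    # Choose the suprathreshold whose clusters generalize across folds
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(scans))
    folds = np.array_split(idx, n_folds)
    scores = []
    for th in thresholds:
        overlaps = [dice(tmap(scans[np.setdiff1d(idx, f)]) > th,
                         tmap(scans[f]) > th) for f in folds]
        scores.append(np.mean(overlaps))
    return thresholds[int(np.argmax(scores))]

# Synthetic example: 24 subjects, 500 pixels, signal in the first 40 pixels
rng = np.random.default_rng(1)
scans = rng.normal(size=(24, 500))
scans[:, :40] += 0.8
print(select_threshold(scans, np.linspace(0.5, 3.0, 6)))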
Memory colours and colour quality evaluation of conventional and solid-state lamps.
Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter
2010-12-06
A colour quality metric based on memory colours is presented. The basic idea is simple: the colour quality of a test source is evaluated as the degree of similarity between the colour appearance of a set of familiar objects and their memory colours. The closer the match, the better the colour quality. This similarity was quantified using a set of similarity distributions obtained by Smet et al. in a previous study. The metric was validated by calculating the Pearson and Spearman correlation coefficients between the metric predictions and the visual appreciation results obtained in a validation experiment conducted by the authors, as well as those obtained in two independent studies. The metric was found to correlate well with the visual appreciation of the lighting quality of the sources used in the three experiments. Its performance was also compared with that of the CIE colour rendering index and the NIST colour quality scale. For all three experiments, the metric was found to be significantly better at predicting the correct visual rank order of the light sources (p < 0.1).
Experimental validation of photon-heating calculation for the Jules Horowitz Reactor
NASA Astrophysics Data System (ADS)
Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A.; Reynard-Carette, C.; Di Salvo, J.; Gruel, A.
2015-04-01
The Jules Horowitz Reactor (JHR) is the next Material-Testing Reactor (MTR) under construction at CEA Cadarache. High values of photon heating (up to 20 W/g) are expected in this MTR. As temperature is a key parameter for material behavior, the accuracy of photon-heating calculations in the different JHR structures is an important stake with regard to JHR safety and performance. In order to experimentally validate the calculation of photon heating in the JHR, an integral experiment called AMMON was carried out in the critical mock-up EOLE at CEA Cadarache to help ascertain the calculation bias and its associated uncertainty. Nuclear heating was measured in different JHR-representative AMMON core configurations using ThermoLuminescent Detectors (TLDs) and Optically Stimulated Luminescent Detectors (OSLDs). This article presents the interpretation methodology and the calculation/experiment (C/E) ratios for all the TLD and OSLD measurements conducted in AMMON. It then discusses the representativeness of the AMMON experiment with regard to the JHR and establishes the calculation biases (and their associated uncertainties) applicable to photon-heating calculations for the JHR.
Complex terrain experiments in the New European Wind Atlas.
Mann, J; Angelou, N; Arnqvist, J; Callies, D; Cantero, E; Arroyo, R Chávez; Courtney, M; Cuxart, J; Dellwik, E; Gottschall, J; Ivanell, S; Kühn, P; Lea, G; Matos, J C; Palma, J M L M; Pauscher, L; Peña, A; Rodrigo, J Sanz; Söderberg, S; Vasiljevic, N; Rodrigues, C Veiga
2017-04-13
The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments, of which some are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement and in some cases replace completely meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized, scanning lidars, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Authors.
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Finally, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
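The constrained discrete optimization can be pictured as a small exhaustive search over test counts under a budget. In the sketch below, the cost figures and the predicted-uncertainty function are placeholders; the paper derives the latter from its probabilistic calibration and validation framework.

import itertools

# Hypothetical per-test costs and total budget
COST_CAL, COST_VAL, BUDGET = 3.0, 2.0, 24.0

def predicted_uncertainty(n_cal, n_val):
    # Placeholder: uncertainty shrinks with more tests, diminishing returns
    return 1.0 / (1.0 + 0.8 * n_cal) + 0.5 / (1.0 + 0.6 * n_val)

feasible = ((nc, nv)
            for nc, nv in itertools.product(range(9), range(13))
            if nc * COST_CAL + nv * COST_VAL <= BUDGET)
best = min(feasible, key=lambda p: predicted_uncertainty(*p))
print("optimal test mix (n_cal, n_val):", best)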
Yamaguchi, Shohei; Konishi, Kozo; Yasunaga, Takefumi; Yoshida, Daisuke; Kinjo, Nao; Kobayashi, Kiichiro; Ieiri, Satoshi; Okazaki, Ken; Nakashima, Hideaki; Tanoue, Kazuo; Maehara, Yoshihiko; Hashizume, Makoto
2007-12-01
This study was carried out to investigate whether eye-hand coordination skill on a virtual reality laparoscopic surgical simulator (the LAP Mentor) was able to differentiate among subjects with different laparoscopic experience and thus confirm its construct validity. A total of 31 surgeons, who were all right-handed, were divided into the following two groups according to their experience as an operator in laparoscopic surgery: experienced surgeons (more than 50 laparoscopic procedures) and novice surgeons (fewer than 10 laparoscopic procedures). The subjects were tested using the eye-hand coordination task of the LAP Mentor, and performance was compared between the two groups. Assessment of the laparoscopic skills was based on parameters measured by the simulator. The experienced surgeons completed the task significantly faster than the novice surgeons. The experienced surgeons also achieved a lower number of movements (NOM), better economy of movement (EOM) and faster average speed of the left instrument than the novice surgeons, whereas there were no significant differences between the two groups for the NOM, EOM and average speed of the right instrument. Eye-hand coordination skill of the nondominant hand, but not the dominant hand, measured using the LAP Mentor was able to differentiate between subjects with different laparoscopic experience. This study also provides evidence of construct validity for eye-hand coordination skill on the LAP Mentor.
Khanduja, P Kristina; Bould, M Dylan; Naik, Viren N; Hladkowicz, Emily; Boet, Sylvain
2015-01-01
We systematically reviewed the effectiveness of simulation-based education targeting independently practicing qualified physicians in acute care specialties. We also describe how simulation is used for performance assessment in this population. Data sources included MEDLINE, Embase, the Cochrane Database of Systematic Reviews, the Cochrane CENTRAL Database of Controlled Trials, and the National Health Service Economic Evaluation Database. The last date of search was January 31, 2013. All original research describing simulation-based education for independently practicing physicians in anesthesiology, critical care, and emergency medicine was reviewed. Data analysis was performed in duplicate, with further review by a third author in cases of disagreement until consensus was reached. Data extraction focused on effectiveness according to Kirkpatrick's model. For simulation-based performance assessment, tool characteristics and sources of validity evidence were also collated. Of 39 studies identified, 30 studies focused on the effectiveness of simulation-based education and nine studies evaluated the validity of simulation-based assessment. Thirteen studies (30%) targeted the lower levels of Kirkpatrick's hierarchy, with reliance on self-reporting. Simulation was unanimously described as a positive learning experience with perceived impact on clinical practice. Of the 17 remaining studies, 10 used a single-group or "no intervention comparison group" design. The majority (n = 17; 44%) were able to demonstrate both immediate and sustained improvements in educational outcomes. Nine studies reported the psychometric properties of simulation-based performance assessment as their sole objective. These predominantly recruited independent practitioners as a convenience sample to establish whether the tool could discriminate between experienced and inexperienced operators, and concentrated on a single aspect of validity evidence. Simulation is perceived as a positive learning experience, with limited evidence to support improved learning. Future research should focus on the optimal modality and frequency of exposure, the quality of assessment tools, and the impact of simulation-based education beyond individuals toward improved patient care.
Skill Assessment in the Interpretation of 3D Fracture Patterns from Radiographs
Rojas-Murillo, Salvador; Hanley, Jessica M; Kreiter, Clarence D; Karam, Matthew D; Anderson, Donald D
2016-01-01
Background: Interpreting two-dimensional radiographs to ascertain the three-dimensional (3D) position and orientation of fracture planes and bone fragments is an important component of orthopedic diagnosis and clinical management. This skill, however, has not been thoroughly explored and measured. Our primary research question is to determine if 3D radiographic image interpretation can be reliably assessed, and whether this assessment varies by level of training. A test designed to measure this skill among orthopedic surgeons would provide a quantitative benchmark for skill assessment and training research. Methods: Two tests consisting of a series of online exercises were developed to measure this skill. Each exercise displayed a pair of musculoskeletal radiographs. Participants selected one of three CT slices of the same or similar fracture patterns that best matched the radiographs. In experiment 1, 10 orthopedic residents and staff responded to nine questions. In experiment 2, 52 residents from both orthopedics and radiology responded to 12 questions. Results: Experiment 1 yielded a Cronbach alpha of 0.47. Performance correlated with experience; r(8) = 0.87, p < 0.01, suggesting that the test could be both valid and reliable with a slight increase in test length. In experiment 2, after removing three non-discriminating items, the Cronbach coefficient alpha was 0.28 and performance correlated with experience; r(50) = 0.25, p < 0.10. Conclusions: Although evidence for reliability and validity was more compelling in the first experiment, the analyses suggest motivation and test duration are important determinants of test efficacy. The interpretation of radiographs to discern 3D information is a promising and relatively unexplored area for surgical skill education and assessment. The online test was useful and reliable. Further test development is likely to increase test effectiveness. Clinical Relevance: Accurately interpreting radiographic images is an essential clinical skill. Quantitative, repeatable techniques to measure this skill can improve resident training and improve patient safety. PMID:27528827
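The reliability figures quoted above are Cronbach's alpha values, and the suggestion that a slightly longer test could be reliable follows from the Spearman-Brown projection. Below is a minimal sketch of both computations; the score matrix and the doubling factor are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_examinees, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def spearman_brown(alpha: float, m: float) -> float:
    """Projected reliability if the test length is multiplied by m."""
    return (m * alpha) / (1.0 + (m - 1.0) * alpha)

# Doubling a test that currently has alpha = 0.47:
print(spearman_brown(0.47, 2.0))  # ~0.64
```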
NASA Astrophysics Data System (ADS)
Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.
2018-02-01
One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the ADREA-HF code is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the ADREA-HF code is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, since in one case the observed ensemble-average dosage was underestimated by slightly more than the acceptance criteria allow. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree greater than the acceptance criteria allow, in 1 of the 2 wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
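As a rough illustration of the three validation metrics named above, the sketch below computes fractional bias (FB), normalized mean square error (NMSE) and the factor-of-two fraction (FAC2) for paired observed/predicted values. The acceptance limits in the comment are the ones commonly cited for urban dispersion (e.g. by Hanna and co-workers); the paper's exact thresholds may differ.

```python
import numpy as np

def validation_metrics(obs, pred):
    """Fractional bias, normalized mean square error and factor-of-two fraction."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
    return fb, nmse, fac2

# Indicative urban-dispersion acceptance limits often quoted in the literature:
# |FB| <= 0.67, NMSE <= 6, FAC2 >= 0.3 (the paper's exact criteria may differ).
fb, nmse, fac2 = validation_metrics([1.0, 2.0, 4.0, 8.0], [1.2, 1.6, 5.0, 6.5])
print(fb, nmse, fac2)
```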
Morin, Ruth T; Axelrod, Bradley N
Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A dataset of 680 neuropsychological evaluation protocols was analyzed using LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best-fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.
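For continuous neuropsychological measures, an analysis of this kind is often implemented as a finite mixture model whose number of classes is chosen by an information criterion such as BIC. The sketch below illustrates that model-selection loop with scikit-learn's Gaussian mixtures on placeholder data; it is an analogue of the approach, not the authors' actual code or software.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder matrix: 680 protocols x 6 measures (validity, cognitive, emotional).
rng = np.random.default_rng(0)
X = rng.normal(size=(680, 6))

# Fit 1..6 latent classes and keep the model with the lowest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(X))
print("selected classes:", best.n_components)
labels = best.predict(X)  # latent-class membership for each protocol
```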
Nicola, Kristy; Waugh, Jemimah; Charles, Emily; Russell, Trevor
2018-06-01
In rural and remote communities, children with motor difficulties have less access to rehabilitation services. Telerehabilitation technology is a potential method to overcome barriers restricting access to healthcare in these areas. Assessment is necessary to guide clinical reasoning; however, it is unclear which paediatric assessments can be administered remotely. The Movement Assessment Battery for Children - 2nd Edition is commonly used by various health professionals to assess the motor performance of children. The aim of this study was to investigate the feasibility and concurrent validity of administering the Movement Assessment Battery for Children - 2nd Edition remotely via telerehabilitation technology compared to the conventional in-person method. Fifty-nine children enrolled in a state school (5-11 years old) volunteered to perform one in-person and one telerehabilitation-mediated assessment. The order of the method of delivery and the therapist performing the assessment were randomized. After both assessments were complete, a participant satisfaction questionnaire was completed by each child. The Bland-Altman limits of agreement for the total test standard score were -3.15 to 3.22, which is narrower than a pre-determined clinically acceptable margin based on the smallest detectable change. This study establishes the feasibility and concurrent validity of the administration of the Movement Assessment Battery for Children - 2nd Edition via telerehabilitation technology. Overall, participants perceived their experience with telerehabilitation positively. Copyright © 2018 Elsevier Ltd. All rights reserved.
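The agreement statistic reported above comes from a Bland-Altman analysis: the bias and 95% limits of agreement of the paired differences between the two delivery methods. A minimal sketch, assuming hypothetical paired per-child total standard scores:

```python
import numpy as np

def bland_altman(in_person, remote):
    """Bias and 95% limits of agreement for paired scores."""
    d = np.asarray(remote, float) - np.asarray(in_person, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired total standard scores for a few children:
bias, (lo, hi) = bland_altman([9, 7, 10, 8, 11], [9, 8, 10, 7, 12])
print(f"bias = {bias:.2f}, limits of agreement = ({lo:.2f}, {hi:.2f})")
```

Agreement is judged acceptable when both limits fall inside the pre-specified clinically acceptable margin, here derived from the smallest detectable change.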
Virtual temporal bone dissection system: OSU virtual temporal bone system: development and testing.
Wiet, Gregory J; Stredney, Don; Kerwin, Thomas; Hittle, Bradley; Fernandez, Soledad A; Abdel-Rasoul, Mahmoud; Welling, D Bradley
2012-03-01
The objective of this project was to develop a virtual temporal bone dissection system that would provide an enhanced educational experience for the training of otologic surgeons. A randomized, controlled, multi-institutional, single-blinded validation study. The project encompassed four areas of emphasis: structural data acquisition, integration of the system, dissemination of the system, and validation. Structural acquisition was performed on multiple imaging platforms. Integration achieved a cost-effective system. Dissemination was achieved on different levels including casual interest, downloading of software, and full involvement in development and validation studies. A validation study was performed at eight different training institutions across the country using a two-arm randomized trial where study subjects were randomized to a 2-week practice session using either the virtual temporal bone or standard cadaveric temporal bones. Eighty subjects were enrolled and randomized to one of the two treatment arms; 65 completed the study. There was no difference between the two groups using a blinded rating tool to assess performance after training. A virtual temporal bone dissection system has been developed and compared to cadaveric temporal bones for practice using a multicenter trial. There was no statistical difference between practice on the current simulator compared to practice on human cadaveric temporal bones. Further refinements in structural acquisition and interface design have been identified, which can be implemented prior to full incorporation into training programs and used for objective skills assessment. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
Timmer, M A; Gouw, S C; Feldman, B M; Zwagemaker, A; de Kleijn, P; Pisters, M F; Schutgens, R E G; Blanchette, V; Srivastava, A; David, J A; Fischer, K; van der Net, J
2018-03-01
Monitoring clinical outcome in persons with haemophilia (PWH) is essential in order to provide optimal treatment for individual patients and to compare the effectiveness of treatment strategies. Experience with measurement of activities and participation in haemophilia is limited and consensus on preferred tools is lacking. The aim of this study was to give a comprehensive overview of the measurement properties of a selection of commonly used tools developed to assess activities and participation in PWH. Electronic databases were searched for articles that reported on the reliability, validity or responsiveness of predetermined measurement tools (5 self-reported and 4 performance-based measurement tools). The methodological quality of the studies was assessed according to the COSMIN checklist. Best-evidence synthesis was used to summarize evidence on the measurement properties. The search resulted in 3453 unique hits. Forty-two articles were included. The self-reported Haemophilia Activity List (HAL), Pediatric HAL (PedHAL) and the performance-based Functional Independence Score in Haemophilia (FISH) were studied most extensively. The methodological quality of the studies was limited. Measurement error, cross-cultural validity and responsiveness have been insufficiently evaluated. Albeit based on limited evidence, the measurement properties of the PedHAL, HAL and FISH are currently considered most satisfactory. Further research needs to focus on measurement error, responsiveness, interpretability and cross-cultural validity of the self-reported tools, and on the validity of performance-based tools able to assess limitations in sports and leisure activities. © 2018 The Authors. Haemophilia Published by John Wiley & Sons Ltd.
The Design of PSB-VVER Experiments Relevant to Accident Management
NASA Astrophysics Data System (ADS)
Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander
Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing the measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the EC-financed TACIS 2.03/97 Contract 3.03.03 Part A (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix through the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of the thermal losses to the environment. This issue particularly affects small-scale facilities and has relevance for the scaling approach related to the power and volume of the facility.
Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R
2015-11-01
The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a "complete mystical experience" that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. © The Author(s) 2015.
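As an illustration of the "complete mystical experience" scoring mentioned above, the sketch below applies the commonly cited criterion of reaching at least 60% of the maximum possible score on every MEQ30 factor. The item-to-factor grouping and the 0-5 rating range are assumptions made for illustration; consult the published instrument for the actual item assignments.

```python
import numpy as np

# Assumed item-to-factor grouping for illustration only (0-5 ratings per item).
FACTORS = {
    "mystical": list(range(0, 15)),
    "positive_mood": list(range(15, 21)),
    "time_space": list(range(21, 27)),
    "ineffability": list(range(27, 30)),
}

def complete_mystical_experience(responses, threshold=0.60):
    """True if every factor reaches `threshold` of its maximum possible score."""
    r = np.asarray(responses, float)
    return all(r[idx].sum() >= threshold * 5 * len(idx) for idx in FACTORS.values())

print(complete_mystical_experience(np.full(30, 4)))  # 4/5 = 80% on every item -> True
```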
Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.
Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo
2016-05-04
Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: an art gallery visual matching task, a supermarket shopping task, and a memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) were automatically generated by the app. Outcomes were computed using non-parametric statistics, due to the non-normality of the distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account when using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.
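Given that age, education and computer experience affected most measures, normative values of this kind are typically tabulated as within-stratum percentiles. A minimal non-parametric sketch with pandas on synthetic placeholder data; the column names, strata and distributions are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd

# Synthetic placeholder data; column names and strata are hypothetical.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_band": rng.choice(["18-39", "40-59", "60+"], 243),
    "education": rng.choice(["basic", "secondary", "higher"], 243),
    "shopping_time_s": rng.gamma(9.0, 20.0, 243),
})

# Non-parametric norms: within-stratum percentiles of the task outcome.
norms = (df.groupby(["age_band", "education"])["shopping_time_s"]
           .quantile([0.05, 0.25, 0.50, 0.75, 0.95])
           .unstack())
print(norms.round(1))
```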
Bonfilio, Rudy; Tarley, César Ricardo Teixeira; Pereira, Gislaine Ribeiro; Salgado, Hérida Regina Nunes; de Araújo, Magali Benjamim
2009-11-15
This paper describes the optimization and validation of an analytical methodology for the determination of losartan potassium in capsules by HPLC using 2^(5-1) fractional factorial and Doehlert designs. This multivariate approach allows a considerable improvement in chromatographic performance using fewer experiments, without additional cost for columns or other equipment. The HPLC method utilized potassium phosphate buffer (pH 6.2; 58 mmol L⁻¹)-acetonitrile (65:35, v/v) as the mobile phase, pumped at a flow rate of 1.0 mL min⁻¹. An octylsilane column (100 mm × 4.6 mm i.d., 5 µm) maintained at 35 °C was used as the stationary phase. UV detection was performed at 254 nm. The method was validated according to the ICH guidelines, showing accuracy, precision (intra-day relative standard deviation (R.S.D.) and inter-day R.S.D. values <2.0%), selectivity, robustness and linearity (r = 0.9998) over a concentration range from 30 to 70 mg L⁻¹ of losartan potassium. The limits of detection and quantification were 0.114 and 0.420 mg L⁻¹, respectively. The validated method may be used to quantify losartan potassium in capsules and to determine the stability of this drug.
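The detection and quantification limits quoted above are consistent with the ICH Q2 formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope. A minimal sketch with hypothetical calibration points over the validated 30-70 mg L⁻¹ range:

```python
import numpy as np

# Hypothetical calibration points over the validated range (mg/L vs. peak area).
conc = np.array([30.0, 40.0, 50.0, 60.0, 70.0])
area = np.array([1510.0, 2022.0, 2517.0, 3031.0, 3528.0])

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = np.sqrt(resid @ resid / (len(conc) - 2))  # residual SD of the calibration line

lod = 3.3 * sigma / slope   # ICH Q2: detection limit
loq = 10.0 * sigma / slope  # ICH Q2: quantification limit
r = np.corrcoef(conc, area)[0, 1]
print(f"r = {r:.4f}, LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```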
Physician groups' use of data from patient experience surveys.
Friedberg, Mark W; SteelFisher, Gillian K; Karp, Melinda; Schneider, Eric C
2011-05-01
In Massachusetts, physician groups' performance on validated surveys of patient experience has been publicly reported since 2006. Groups also receive detailed reports of their own performance, but little is known about how physician groups have responded to these reports. Our objective was to examine whether and how physician groups are using patient experience data to improve patient care. During 2008, we conducted semi-structured interviews with the leaders of 72 participating physician groups (out of 117 groups receiving patient experience reports). Based on leaders' responses, we identified three levels of engagement with patient experience reporting: no efforts to improve (level 1), efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and efforts to improve group-wide performance (level 3). The main measures were groups' level of engagement and their specific efforts to improve patient care. Forty-four group leaders (61%) reported group-wide improvement efforts (level 3), 16 (22%) reported efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and 12 (17%) reported no performance improvement efforts (level 1). Level 3 groups were more likely than others to have an integrated medical group organizational model (84% vs. 31% at level 2 and 33% at level 1; P < 0.005) and to employ the majority of their physicians (69% vs. 25% and 20%; P < 0.05). Among level 3 groups, the most common targets for improvement were access, communication with patients, and customer service. The most commonly reported improvement initiatives were changing office workflow, providing additional training for nonclinical staff, and adopting or enhancing an electronic health record. Despite statewide public reporting, physician groups' use of patient experience data varied widely. Integrated organizational models were associated with greater engagement, and efforts to enhance clinicians' interpersonal skills were uncommon, with groups predominantly focusing on office workflow and support staff.
Performance analysis of jump-gliding locomotion for miniature robotics.
Vidyasagar, A; Zufferey, Jean-Christophe; Floreano, Dario; Kovač, M
2015-03-26
Recent work suggests that jumping locomotion in combination with a gliding phase can be used as an effective mobility principle in robotics. Compared to pure jumping without a gliding phase, the potential benefits of hybrid jump-gliding locomotion include the ability to extend the distance travelled and to reduce the potentially damaging impact forces upon landing. This publication evaluates the performance of jump-gliding locomotion and provides models for the analysis of the relevant flight dynamics. It also defines a jump-gliding envelope that encompasses the range that can be achieved with jump-gliding robots and that can be used to evaluate the performance and improvement potential of jump-gliding robots. We first present a planar dynamic model and then a simplified closed-form model, which allow quantification of the distance travelled and the impact energy on landing. To validate the predictions of these models, we performed experiments with a novel jump-gliding robot, named the 'EPFL jump-glider'. It has a mass of 16.5 g and is able to perform jumps from elevated positions, perform steered gliding flight, land safely and traverse on the ground by repetitive jumping. The experiments indicate that the developed jump-gliding model fits very well with the measured flight data of the EPFL jump-glider, confirming the benefits of jump-gliding locomotion for mobile robotics. The jump-gliding envelope considerations indicate that the EPFL jump-glider, when traversing from a 2 m height, reaches 74.3% of the optimal jump-gliding distance, whereas pure jumping without a gliding phase reaches only 33.4% of the optimal jump-gliding distance. Methods of further improving flight performance based on the models and on inspiration from biological systems are presented, providing mechanical design pathways for future jump-gliding robot designs.
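To give a feel for the kind of closed-form model described above, the sketch below estimates jump-glide range as a ballistic ascent to apex followed by a steady glide at constant glide ratio. This is a deliberately simplified stand-in, not the paper's actual model, and the launch parameters are hypothetical.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def jump_glide_range(v0, theta_deg, h0, glide_ratio):
    """Simplified range: ballistic jump to apex, then a steady glide
    that covers `glide_ratio` metres horizontally per metre of height."""
    th = np.radians(theta_deg)
    vx, vy = v0 * np.cos(th), v0 * np.sin(th)
    t_apex = vy / G                      # time to reach the apex
    x_apex = vx * t_apex                 # horizontal distance at apex
    h_apex = h0 + vy**2 / (2 * G)        # apex height above the ground
    return x_apex + glide_ratio * h_apex

# e.g. launch at 6 m/s, 50 degrees, from a 2 m ledge, glide ratio 1.5:
print(jump_glide_range(6.0, 50.0, 2.0, 1.5))  # ~6.4 m
```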
Investigations on the performance of chevron type plate heat exchangers
NASA Astrophysics Data System (ADS)
Dutta, Oruganti Yaga; Nageswara Rao, B.
2018-01-01
This paper presents empirical relations for chevron-type plate heat exchangers (PHEs) and demonstrates their validity through comparison with PHE test data. In order to examine the performance of PHEs, the pressure drop (ΔP), the overall heat transfer coefficient (U_m) and the effectiveness (ε) are estimated by considering the properties of the plate material and the working fluid, the number of plates (N_t) and the chevron angle (β). It is well known that a larger plate surface area provides a higher rate of heat transfer (Q̇) and thereby higher effectiveness (ε). However, the required performance can also be achieved by increasing the number of plates without altering the plate dimensions, which avoids a new design of the system. The application of Taguchi's design of experiments is examined so as to require fewer experiments, demonstrated by setting levels for the parameters and comparing the test data with the estimated output responses.
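The point that adding plates raises performance without redesigning the plate can be illustrated with the standard effectiveness-NTU relation: more plates mean more heat transfer area, hence higher NTU and effectiveness. The sketch below uses a counterflow ε-NTU formula with assumed values for U, per-plate area and capacity rates; none of these numbers come from the paper.

```python
import numpy as np

def effectiveness_counterflow(U, A, C_hot, C_cold):
    """epsilon-NTU relation for a (counterflow-like) plate heat exchanger."""
    C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
    Cr = C_min / C_max
    NTU = U * A / C_min
    if np.isclose(Cr, 1.0):
        return NTU / (1 + NTU)
    return (1 - np.exp(-NTU * (1 - Cr))) / (1 - Cr * np.exp(-NTU * (1 - Cr)))

# Assumed values: U = 2500 W/m^2K, 0.04 m^2 per plate, end plates inactive.
A_plate = 0.04
for n_plates in (20, 40, 60):
    A = A_plate * (n_plates - 2)
    eps = effectiveness_counterflow(2500.0, A, 4000.0, 3500.0)
    print(n_plates, "plates -> effectiveness", round(eps, 3))
```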
The Relationship Between Artificial and Second Language Learning.
Ettlinger, Marc; Morgan-Short, Kara; Faretta-Stutenberg, Mandy; Wong, Patrick C M
2016-05-01
Artificial language learning (ALL) experiments have become an important tool for exploring principles of language and language learning. A persistent question in all of this work, however, is whether ALL engages the linguistic system and whether ALL studies are ecologically valid assessments of natural language ability. In the present study, we considered these questions by examining the relationship between performance in an ALL task and second language learning ability. Participants enrolled in a Spanish language class were evaluated using a number of different measures of Spanish ability and classroom performance, which were compared to IQ and to a number of different measures of ALL performance. The results show that success in ALL experiments, particularly with more complex artificial languages, correlates positively with indices of L2 learning even after controlling for IQ. These findings provide a key link between studies involving ALL and our understanding of second language learning in the classroom. Copyright © 2015 Cognitive Science Society, Inc.
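"Correlates positively ... after controlling for IQ" refers to a partial correlation: the association between ALL performance and L2 outcomes once the IQ component is regressed out of both. A minimal sketch on synthetic data; the variable names and effect sizes are hypothetical.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z (e.g. IQ) out of both."""
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical variables: ALL task score, Spanish course grade, IQ.
rng = np.random.default_rng(3)
iq = rng.normal(100, 15, 60)
all_score = 0.3 * iq + rng.normal(0, 10, 60)
l2_grade = 0.2 * iq + 0.5 * all_score + rng.normal(0, 10, 60)
print(partial_corr(all_score, l2_grade, iq))
```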
Off-design Performance Analysis of Multi-Stage Transonic Axial Compressors
NASA Astrophysics Data System (ADS)
Du, W. H.; Wu, H.; Zhang, L.
Because of the complex flow fields and component interactions in modern gas turbine engines, extensive experiments are required to validate their performance and stability. The experimental process can become expensive and complex. Modeling and simulation of gas turbine engines are a way to reduce experimental costs, provide fidelity and enhance the quality of essential experiments. The flow field of a transonic compressor contains flow features that are difficult to predict: boundary layer transition and separation, shock/boundary-layer interactions, and large flow unsteadiness. Accurate off-design performance prediction for transonic axial compressors is especially difficult, due in large part to three-dimensional blade design and the resulting flow field. Although recent advancements in computer capacity have brought computational fluid dynamics to the forefront of turbomachinery design and analysis, the grid and turbulence model still limit Reynolds-averaged Navier-Stokes (RANS) approximations in the multi-stage transonic axial compressor flow field. Streamline curvature methods therefore remain the dominant numerical approach and an important tool for turbomachinery analysis and design, and it is generally accepted that streamline curvature solution techniques will provide satisfactory flow predictions as long as the losses, deviation and blockage are accurately predicted.
NASA Astrophysics Data System (ADS)
Sigaut, Lorena; Villarruel, Cecilia; Ponce, María Laura; Ponce Dawson, Silvina
2017-06-01
Many cell signaling pathways involve the diffusion of messengers that bind and unbind to and from intracellular components. Quantifying their net transport rate under different conditions then requires separate estimates of their free diffusion coefficient and binding and unbinding rates. In this paper, we show how, by performing sets of fluorescence correlation spectroscopy (FCS) experiments under different conditions, it is possible to quantify free diffusion coefficients and on and off rates of reaction-diffusion systems. We develop the theory and present a practical implementation for the case of the universal second messenger, calcium (Ca2+), and single-wavelength dyes that increase their fluorescence upon Ca2+ binding. We validate the approach with experiments performed in aqueous solutions containing Ca2+ and Fluo4 dextran (both in its high and low affinity versions). Performing FCS experiments with tetramethylrhodamine-dextran in Xenopus laevis oocytes, we infer the corresponding free diffusion coefficients in the cytosol of these cells. Our approach can be extended to other physiologically relevant reaction-diffusion systems to quantify biophysical parameters that determine the dynamics of various variables of interest.
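As background for how FCS yields diffusion coefficients, the sketch below fits the standard 3D free-diffusion autocorrelation model G(τ) = (1/N) (1 + τ/τ_D)^(-1) (1 + τ/(s² τ_D))^(-1/2) to a synthetic curve and converts the fitted diffusion time into D = w²/(4 τ_D). The beam-waist and axial-ratio values are calibration-dependent assumptions, and the reaction terms the paper adds for Ca2+-dye binding are omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

W_XY, S = 0.25e-6, 5.0  # beam waist (m) and axial ratio: assumed calibration values

def g_diff(tau, N, tau_d):
    """3D free-diffusion FCS autocorrelation for a Gaussian observation volume."""
    return (1.0 / N) / ((1 + tau / tau_d) * np.sqrt(1 + tau / (S**2 * tau_d)))

# tau (s) and the measured autocorrelation would come from the correlator;
# here a synthetic noisy curve stands in for real data.
tau = np.logspace(-6, 0, 60)
g = g_diff(tau, 5.0, 4e-4) + np.random.default_rng(1).normal(0, 2e-4, tau.size)

(N_fit, tau_d_fit), _ = curve_fit(g_diff, tau, g, p0=(1.0, 1e-3))
D = W_XY**2 / (4 * tau_d_fit)  # free diffusion coefficient, m^2/s
print(f"N = {N_fit:.2f}, tau_D = {tau_d_fit:.2e} s, D = {D:.2e} m^2/s")
```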
von Dadelszen, Peter; Allaire, Catherine
2011-01-01
Background: Concern regarding the quality of surgical training in obstetrics and gynecology residency programs is focusing attention on competency-based education. Because open surgical skills cannot necessarily be translated into laparoscopic skills, and with minimally invasive surgery becoming standard in operative gynecology, the discrepancy in training between obstetrics and gynecology will widen. Training on surgical simulators with virtual reality may improve surgical skills. However, before incorporation into training programs for gynecology residents, the validity of such instruments needs first to be established. We sought to prove the construct validity of a virtual reality laparoscopic simulator, the SurgicalSim™, by showing its ability to distinguish between surgeons with different laparoscopic experience. Methods: Eleven gynecologic surgeons (experts) and 11 perinatologists (controls) completed 3 tasks on the simulator, and 10 performance parameters were compared. Results: The experts performed faster, more efficiently, and with fewer errors, proving the construct validity of the SurgicalSim. Conclusions: Laparoscopic virtual reality simulators can measure relevant surgical skills and so distinguish between subjects having different skill levels. Hence, these simulators could be integrated into gynecology resident endoscopic training and utilized for objective assessment. Moreover, the skills required for competency in obstetrics cannot necessarily be utilized for better performance in laparoscopic gynecology. PMID:21985726
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tommasi, J.; Ruggieri, J. M.; Lebrat, J. F.
The latest release (2.1) of the ERANOS code system, using the JEF-2.2, JEFF-3.1 and ENDF/B-VI r8 multigroup cross-section libraries, is currently being validated on fast reactor critical experiments at CEA-Cadarache (France). This paper briefly presents the library effect studies and the detailed best-estimate validation studies performed up to now as part of the validation process. The library effect studies are performed over a wide range of experimental configurations, using simple model and method options. They yield global trends about the shift from the JEF-2.2 to the JEFF-3.1 cross-section library that can be related to individual sensitivities and cross-section changes. The more detailed, best-estimate calculations have been performed up to now on three experimental configurations carried out in the MASURCA critical facility at CEA-Cadarache: two cores with a spectrum softened by large amounts of graphite (MAS1A' and MAS1B), and a core representative of sodium-cooled fast reactors (CIRANO ZONA2A). Calculated values have been compared to measurements, and discrepancies analyzed in detail using perturbation theory. Values calculated with JEFF-3.1 were found to be within 3 standard deviations of the measured values, and at least of the same quality as the JEF-2.2-based results. (authors)